Jacques-André Landry

Some current projects:


The following paragraphs briefly describe some of the ongoing projects under my supervision.

Development of a multi-agent model to facilitate the sustainable management of boat traffic in the Saguenay-St. Lawrence Marine Park and Marine Protected Area in Quebec.
(Clément Chion and Philippe Lamontagne)
NSERC strategic project in collaboration with Parks Canada, Fisheries and Oceans Canada, the Groupe de recherche et d'éducation sur les mammifères marins (GREMM), the École de technologie supérieure (Montréal) and the University of Calgary. The objective is to develop an agent-based model simulating the movement of marine traffic (whale-watching vessels, commercial shipping, pleasure craft, kayaks, etc.) and marine mammals in the Saguenay-St. Lawrence Marine Park and the proposed adjacent Marine Protected Area, in order to investigate the effects of different management scenarios on spatiotemporal patterns of traffic circulation. Cristiane A. Martins (Ph.D. student), Clément Chion (Ph.D. student), Philippe Lamontagne (M.Sc. student) and Samuel Turgeon (B.Sc. honors student) are currently working on this three-year project.
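As a rough illustration of the agent-based approach (not the project's actual model), the sketch below moves hypothetical vessel and whale agents on a toy grid and counts how often vessels come within an assumed disturbance radius; all positions, speeds and thresholds are made up.

```python
# Illustrative agent-based sketch only: toy vessel and whale agents, invented parameters.
import random
import math

class Agent:
    def __init__(self, x, y, speed):
        self.x, self.y, self.speed = x, y, speed

    def move_toward(self, tx, ty):
        """Advance one time step toward a target position."""
        angle = math.atan2(ty - self.y, tx - self.x)
        self.x += self.speed * math.cos(angle)
        self.y += self.speed * math.sin(angle)

random.seed(0)
site = (50.0, 50.0)                                   # hypothetical observation site
vessels = [Agent(random.uniform(0, 100), random.uniform(0, 100), 2.0) for _ in range(10)]
whales = [Agent(random.uniform(0, 100), random.uniform(0, 100), 0.5) for _ in range(5)]

close_encounters = 0
for step in range(200):
    for v in vessels:
        v.move_toward(*site)                          # vessels steer toward the site
    for w in whales:
        w.move_toward(w.x + random.uniform(-1, 1), w.y + random.uniform(-1, 1))  # whales drift
    close_encounters += sum(
        1 for v in vessels for w in whales if math.hypot(v.x - w.x, v.y - w.y) < 2.0
    )
print(f"vessel-whale co-occurrences within 2 units over 200 steps: {close_encounters}")
```

A real management-scenario comparison would rerun such a simulation with different traffic rules and compare the resulting spatiotemporal exposure patterns.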

Relationship between complexity and ecological integrity in coral reef ecosystems (Jonathan Bouchard)
In this collaborative project between the Marine Science Institute of the University of the Philippines, the Coral Reef Ecology Working Group at Ludwig-Maximilians-Universität, Munich, the Munich Systems Biology Network and the École de technologie supérieure, we are applying spatial measures of complexity to underwater images of coral reef ecosystems as a means of characterizing their ecological integrity. Our group is interested in the extraction of information from digital images using advanced computer vision and data mining techniques.
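As one example of a spatial complexity measure that could be applied to such images (an assumption on our part, not necessarily the metric used in the project), the sketch below estimates a box-counting fractal dimension from a binarized image.

```python
# One possible spatial complexity measure: box-counting fractal dimension of a binary image.
import numpy as np

def box_counting_dimension(binary_image: np.ndarray) -> float:
    """Estimate fractal dimension from the slope of log(box count) vs log(1/box size)."""
    sizes, counts = [], []
    size = min(binary_image.shape) // 2
    while size >= 2:
        count = 0
        for i in range(0, binary_image.shape[0] - size + 1, size):
            for j in range(0, binary_image.shape[1] - size + 1, size):
                if binary_image[i:i + size, j:j + size].any():
                    count += 1
        sizes.append(size)
        counts.append(count)
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Synthetic stand-in for a segmented coral outline.
rng = np.random.default_rng(0)
image = rng.random((256, 256)) > 0.7
print(f"estimated box-counting dimension: {box_counting_dimension(image):.2f}")
```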

ARTIFICIAL INTELLIGENCE TECHNIQUES FOR BIOLOGICAL OBJECT CLASSIFICATION IN TWO-DIMENSIONAL IMAGES (Yan Levasseur)
The visual recognition of biological objects such as food-industry products and plant species in natural environments has advanced considerably over the last 30 years. Today, powerful recognition algorithms are used to evaluate the quality of food production and to monitor ecosystems to ensure their protection. In most cases, vision and data-processing experts developed customized solutions that achieved the desired results. The goal of this research is to provide recommendations for the development of a generic object recognition system (from images) that requires as little human intervention as possible. Such an algorithm could be used by non-experts such as industrial engineers, botanists and biologists.

To achieve this goal, we studied the stages of the recognition process starting from images. In practice, we set up a system for segmentation, feature extraction and classification. In addition, we developed a Genetic Programming (GP) classifier and integrated it into the free and open-source data-mining software Weka to support collaborative research efforts in evolutionary computing. Six different classifiers were used in our experiments: naïve Bayes, the C4.5 decision tree, K Nearest Neighbours (KNN), GP, the Support Vector Machine (SVM) and the Multilayer Perceptron (MP). In a second round of experiments, we combined all classifiers (except KNN) using the boosting meta-algorithm.

We compared the classification results of the six algorithms on six distinct databases, three of which we created. The databases contain information extracted from images of cereals, pollen, wood knots, raisins, leaves and computer characters. We automatically segmented the majority of the images, then extracted around 40 features from each object and transformed the feature set using Principal Component Analysis (PCA). Finally, we compiled the classification results of the six classifiers, and of their combination with boosting, for both the basic and the transformed feature sets. Each experiment was carried out 50 times, with a random split of the training and test databases.

We observed good recognition rates for problems with a large number of training samples. The ranking of the classifiers by median error rate is consistent for the majority of the problems; the MP and the SVM generally obtain the best classification rates. For problems containing a large number of samples, our system obtained encouraging results. Despite the apparent superiority of some classifiers, the experiments do not allow us to recommend the priority use of a specific classifier in all cases. We instead suggest the use of an evolutionary meta-heuristic to analyse a problem's data in order to choose or combine suitable classifiers. We also believe that our system's classification performance could be improved by adding new relevant features and by optimizing the classifiers' parameters according to the data.
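The protocol of repeated random splits, PCA transformation and classifier comparison can be sketched as follows; this is an illustrative scikit-learn stand-in (the thesis itself worked in Weka with a custom GP classifier), using the built-in digits data in place of the six image databases.

```python
# Illustrative stand-in for the compare-classifiers-with-PCA protocol described above.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # stand-in for the extracted image features
X = PCA(n_components=20).fit_transform(X)    # transform the feature set, as in the study

classifiers = {
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(),   # C4.5 analogue
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=500),
}

errors = {name: [] for name in classifiers}
for run in range(10):                        # 10 random splits here; the study used 50
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=run)
    for name, clf in classifiers.items():
        errors[name].append(1.0 - clf.fit(Xtr, ytr).score(Xte, yte))

for name, errs in errors.items():
    print(f"{name}: median error {np.median(errs):.3f}")
```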

3D MODEL SYNTHESIS FROM 2D PARTIAL INFORMATION: A NATURAL PLANT APPLICATION
(Luis Eduardo Da Costa)

The analysis of cultivated fields using near remote sensing has been shown to be the best method for detecting physiological disorders of the plants in the field. To perform this type of analysis it is important to be able to manipulate the plants virtually, using computer models that faithfully represent them; in this thesis, a method is proposed (from the definition of a formalism to the design and testing of an algorithm) for generating these models from 2D field photographs. The formalism chosen as the basis for plant representation is the Lindenmayer System (L-System); L-Systems are grammatical systems controlled by an initial condition and one or more rewriting rules, and the repeated iteration of an L-System often produces interesting emergent behaviour. However, it is difficult to discover the rules that produce a specific desired behaviour in this formalism; this is called the "inverse" problem for Lindenmayer systems. Generating a computer model of a plant is equivalent to solving the inverse problem for a special subtype of this formalism, called "bracketed Lindenmayer systems"; this thesis demonstrates the possibility of solving the inverse problem for bracketed Lindenmayer systems by means of an evolutionary algorithm. A detailed description of the algorithm, along with the justification of the chosen design, is presented; a set of experiments, intended to test the correctness of the method, shows that the algorithm explores the space of candidate solutions in a satisfactory manner, and that the approximations it proposes are adequate in most cases. Its limitations and weaknesses are also reported; we discuss them and outline our future work.
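For readers unfamiliar with the formalism, the short sketch below iterates a bracketed L-System forward from an axiom using a standard textbook plant-like rule (not one inferred in the thesis); the inverse problem addressed here is recovering such rules from the observed result.

```python
# Minimal sketch of the forward problem only: iterating a bracketed L-System.
def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Apply the rewriting rules to every symbol, in parallel, for a number of iterations."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"X": "F[+X][-X]FX", "F": "FF"}   # brackets [ ] push/pop the drawing (turtle) state
print(expand("X", rules, 3))
# The inverse problem tackled in the thesis runs the other way: given the final structure
# (seen in a 2D photograph of the plant), recover rules like these with an evolutionary algorithm.
```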

FEATURE CONSTRUCTION BY GENETIC PROGRAMMING FOR A MULTICLASS RECOGNITION SYSTEM.
(Brice Bourgoin)

The goal of this research is to optimize the automated recognition of objects in computer vision or remote sensing applications. The premise is that classifiers are sensitive to the data representation space and that a reorganization of this space could improve the performance of some of them. We pursue two goals: to propose a framework for a system requiring the least possible human intervention at the time of its installation, and to minimize the absolute error rate.

To do so, we used a Genetic Programming algorithm with coevolution. Its objective was to build a new set of features based on their potential for classification, as measured by a nearest-neighbour criterion. This set of features was then tested on several types of classifiers: nearest neighbours, artificial neural networks and support vector machines. In order to better focus the research, we chose to restrict the Genetic Programming algorithm to the reorganization of the representation space rather than have it generate a complete classifier. In this way, we hoped to benefit from the strength of advanced classifiers such as support vector machines and prevent the Genetic Programming algorithm from reinventing what is already known. The algorithm's only objective was to concentrate on what is sometimes a weakness in a classification system: the data representation space.

We used two completely distinct databases: the first containing handwritten digits, the second concerned with the differentiation of cereals such as barley, corn and oats. The first database contains ten classes, the second seven; they are thus real computer vision problems and strongly multiclass ones. In addition to confirming the results, the interest of using two databases was to highlight the reduced need for human intervention in the initial setup of a classification system. Indeed, for the second database we used exactly the same parameters as those selected for the first: the internal parameters of our algorithm appear to be rather universal.

Several simulations allowed us to observe good performance with the new representation spaces, whatever final classifier we used. In that respect, the robustness of the proposed system in reorganizing the representation space seems to offer improved performance when compared to a single classifier, and we demonstrated the possibility of reducing the human intervention needed to install the system. Moreover, the absolute performance seems to be improved, in particular when a support vector machine is used downstream. This improvement was not always large, but seemed sufficiently promising to pursue our investigation further. Indeed, our approach still offers many avenues for improvement, thanks mainly to the many possibilities offered by algorithms based on the Genetic Programming paradigm.
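A crude stand-in for the idea is sketched below: random arithmetic combinations of the original features are scored with a nearest-neighbour criterion and the best constructed space is kept. The thesis evolves such combinations with a coevolutionary GP rather than sampling them at random, and the data set used here is only a placeholder.

```python
# Rough stand-in for GP feature construction: random feature combinations scored by k-NN.
import random
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
rng = random.Random(0)
ops = [np.add, np.subtract, np.multiply]

def random_feature(X):
    """One constructed feature: a random binary operation on two original columns."""
    i, j = rng.randrange(X.shape[1]), rng.randrange(X.shape[1])
    return rng.choice(ops)(X[:, i], X[:, j])

def score(features, y):
    """Fitness: k-NN cross-validated accuracy in the constructed representation space."""
    return cross_val_score(KNeighborsClassifier(), np.column_stack(features), y, cv=3).mean()

best_score = 0.0
for _ in range(20):                          # a GP would evolve candidates instead of sampling
    candidate = [random_feature(X) for _ in range(10)]
    best_score = max(best_score, score(candidate, y))
print(f"best k-NN accuracy in a 10-feature constructed space: {best_score:.3f}")
```

In the thesis, the constructed space is then handed to a downstream classifier such as a support vector machine; only the representation, not the classifier, is evolved.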

GENETIC PROGRAMMING APPLIED TO HYPERSPECTRAL IMAGERY FOR BIOPHYSICAL VARIABLE ASSESSMENT WITHIN A LARGE-SCALE CROP: THE CASE OF NITROGEN IN A CORNFIELD (Clément Chion)
One of the main issues in remote sensing is the extraction of relevant information from a data set. The recent development of hyperspectral tools has considerably increased the amount of available data and, consequently, new data-mining techniques are required. In precision farming, the emergence and democratization of hyperspectral imagery raises great hopes by providing powerful tools for more rational management. Indeed, since the spectral properties of plants and their components are well studied, extrapolating this knowledge from the plant to the canopy scale appears promising. However, many external factors such as air humidity, irradiance or pixel resolution introduce noise and make information extraction more complex at the canopy scale. One answer to this problem is vegetation indices (VIs), defined as simple arithmetic combinations of spectral bands; one of the goals of these VIs is to bring out a specific canopy biophysical parameter. In our study, we try to find a VI correlated with nitrogen variability across a cornfield canopy by means of a genetic programming based algorithm trained with in situ measurements. This approach led us to a model predicting nitrogen levels across the field with a coefficient of determination R² = 84.83% and a relative error RMSE = 14.34%. On our data set, this result improves on all other models found in the literature, the best of them, by Hansen et al., predicting nitrogen with R² = 70.23% and RMSE = 18.03%. The other important result is that model precision depends less on dataset size than on training data accuracy. At present, it does not yet seem possible to find a general model for nitrogen assessment that is effective in all real situations. Meanwhile, coupling "ground truth" with hyperspectral data can lead to high levels of efficiency when investigations are made with specific search algorithms.
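The fitness evaluation at the heart of such a search can be sketched as follows: one candidate band combination is scored by the R² of a linear fit against the nitrogen measurements. The bands, reflectances and nitrogen values below are synthetic placeholders, not data from the study.

```python
# Illustrative only: scoring one candidate vegetation index, i.e. the kind of fitness
# evaluation a GP search would run on each evolved band combination.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bands = 50, 163
spectra = rng.uniform(0.05, 0.6, size=(n_samples, n_bands))   # stand-in canopy reflectances
nitrogen = rng.uniform(1.0, 4.0, size=n_samples)               # stand-in in situ nitrogen values

def candidate_index(spectra, i=120, j=40):
    """One GP individual might encode a normalized-difference combination of two bands."""
    return (spectra[:, i] - spectra[:, j]) / (spectra[:, i] + spectra[:, j])

def fitness(index_values, target):
    """R² of a simple linear fit of the target on the index (higher is better)."""
    slope, intercept = np.polyfit(index_values, target, 1)
    residuals = target - (slope * index_values + intercept)
    return 1.0 - residuals.var() / target.var()

print(f"R² of this candidate index: {fitness(candidate_index(spectra), nitrogen):.3f}")
```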

INDEPENDENT COMPONENT ANALYSIS FOR THE CHARACTERIZATION OF HYPERSPECTRAL IMAGES IN REMOTE SENSING (Cyril Viron)
To address some current environmental problems, hyperspectral imaging is seen as a means of obtaining the local composition of an agricultural parcel. To this end, the extraction of spectral signatures is of interest as it allows an element to be characterized in a specific manner. However, the spectral signature obtained from a given parcel is in fact a weighted mixture of the various elements present; the individual signature of each element is then sought, and independent component analysis (ICA) could be the tool of choice to accomplish this task. In spite of limited applications of the ICA method in this field, it was chosen because of its popularity in signal processing. One of the most recent and efficient implementations, the FastICA algorithm, was applied first to the unmixing of grayscale images, then to classic temporal signals (to verify its efficiency) and finally to a subset of the USGS spectral signature database. The approach was to compare the extracted independent components to a reference base and form pairs based on similarity. However, due to the ambiguities and the lack of a validity criterion associated with ICA, it was possible neither to predict nor to verify the pairs. To remedy this, our experimental protocol was divided into theoretical and practical comparisons, based on confidence levels, which allowed us to form, on one hand, the right pairs in theory (partial base) and, on the other hand, experimental pairs (entire base). These are finally compared to determine the success of the associations. Globally, based on two relative confidence thresholds, the results are excellent for signals, good for images but mediocre for spectral signatures. This last case is explained by the much more pervasive effect of two general problems: the subjectivity of the decision-making and the unavoidable decorrelation, which introduces deformations and too great a dependence on the selected base. To improve the method, constructive recommendations are proposed in support of the second part of this work, which was intended to be innovative.
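A minimal FastICA example is sketched below: two synthetic "pure" signatures are mixed with random weights and recovered as independent components, then paired with the references by correlation, the kind of similarity-based pairing the protocol relies on. The signatures are placeholders, not USGS spectra.

```python
# Minimal FastICA unmixing sketch on synthetic spectra (placeholders, not USGS data).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 300)
pure = np.vstack([
    np.exp(-((wavelengths - 550) / 60) ** 2),   # placeholder signature 1
    np.exp(-((wavelengths - 800) / 90) ** 2),   # placeholder signature 2
])
mixing = rng.uniform(0.2, 1.0, size=(5, 2))      # five observed mixed spectra
observed = mixing @ pure + 0.01 * rng.standard_normal((5, wavelengths.size))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(observed.T).T     # recovered sources, up to sign and scale

# ICA's ambiguities (order, sign, scale) are exactly why a similarity-based pairing
# step against a reference base is needed.
for k, comp in enumerate(components):
    best_match = max(range(2), key=lambda i: abs(np.corrcoef(comp, pure[i])[0, 1]))
    print(f"component {k} best matches pure signature {best_match}")
```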

PREDICTING FRUIT MATURITY OF HASS AVOCADO USING HYPERSPECTRAL IMAGERY (Denis Girod)
The maturity of avocado fruit is usually assessed by measuring its dry matter (DM) content, a destructive and time-consuming process. The aim of this study is to introduce a quick and non-destructive technique for estimating the dry matter content of an avocado fruit.

'Hass' avocado fruits at different maturity stages and with varying skin colour were analyzed by hyperspectral imaging in reflectance and absorbance modes. Their dry matter content ranged from 19.8% to 42.5%. The hyperspectral data consist of mean spectra of avocados in the visible and near-infrared regions, from 400 nm to 1000 nm, for a total of 163 spectral bands.

The relationship between spectral wavelengths and dry matter content was modeled using the chemometric partial least squares (PLS) regression technique. Calibration and validation statistics, such as the correlation coefficient (R²) and the prediction error (RMSEP), were used to compare the predictive accuracy of the different models. PLS modeling over several different randomizations of the database, with full cross-validation over the entire spectral range, resulted in a mean R² of 0.86 with a mean RMSEP of 2.45 in reflectance mode, and a mean R² of 0.94 with a mean RMSEP of 1.59 in absorbance mode. This indicates that reasonably accurate models (R² > 0.8) can be obtained for DM content with the entire spectral range.

The study also shows that wavelength reduction can be applied to the problem: starting with 163 spectral bands, dry matter could be predicted with identical performance using 10% of the initial wavelengths (16 spectral bands).

The study thus demonstrates the feasibility of using visible and near-infrared hyperspectral imaging in absorbance mode to determine a physicochemical property, namely dry matter content, of 'Hass' avocados in a non-destructive way. Furthermore, it gives some clues about which spectral bands could be useful to this end.
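A hedged sketch of the modelling step, using scikit-learn's PLS regression on synthetic 163-band spectra (the data, number of latent components and train/test split are illustrative only, not those of the study):

```python
# Illustrative PLS regression of a dry-matter-like target on synthetic 163-band spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_fruits, n_bands = 120, 163
spectra = rng.uniform(0.1, 0.9, size=(n_fruits, n_bands))            # placeholder mean spectra
dry_matter = 20 + 20 * spectra[:, 80] + rng.normal(0, 1, n_fruits)    # placeholder DM (%)

Xtr, Xte, ytr, yte = train_test_split(spectra, dry_matter, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(Xtr, ytr)                     # 8 latent variables (assumed)
pred = pls.predict(Xte).ravel()

print(f"R²    = {r2_score(yte, pred):.2f}")
print(f"RMSEP = {mean_squared_error(yte, pred) ** 0.5:.2f}")
```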

A METHOD FOR COUNTING PLANT CELLS USING ARTIFICIAL VISION (Dominic Moreau)
The counting of plant cells is not a common task; plant cells are so complex that they are still counted manually. The goal of this project is to develop a method for counting Eschscholtzia californica plant cells in bioreactors. The cells are counted in liquid suspension to evaluate their concentration. Three problems had to be resolved: the lack of distinctive attributes, the segmentation, and the estimation of the number of cells contained in cell clusters. The lack of distinctive attributes is common to plant cells; combining multiple operators allows the recognition of isolated plant cells. In order to make the segmentation more robust, the background is estimated from ten available images and later subtracted from the image to be segmented.

The cell clusters pose a more complex problem. First, it is very difficult to photograph these clusters and count them precisely; to obtain a target to compare with, the average count of five experienced researchers was used as a reference. The cluster volume can then be used to estimate the number of cells it contains: treating the cluster as a solid of revolution allows the third dimension to be recovered, and the resulting volume is divided by the volume of an isolated cell, estimated by assuming it to be an ovoid.

Results are comparable to those of experienced researchers, with an average error of 12 to 15 percent, and bring consistency to the evaluation of growth rate. To increase precision, several simple recommendations are proposed, such as researching new attributes or using better equipment. In the meantime, an interactive tool has been developed to compensate for the method's lack of robustness.
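The cluster estimate can be sketched numerically as follows; the silhouette profile, sampling step and cell half-axes are invented values used only to show the volume-of-revolution divided by ovoid-volume calculation.

```python
# Sketch of the volume-based cluster estimate; all numbers are illustrative placeholders.
import numpy as np

def cluster_volume_of_revolution(radii_um: np.ndarray, step_um: float) -> float:
    """Approximate the cluster as a solid of revolution: stack thin disks of radius r(x)."""
    return float(np.sum(np.pi * radii_um ** 2 * step_um))

def cell_volume_ovoid(a_um: float, b_um: float, c_um: float) -> float:
    """Volume of an isolated cell modelled as an ovoid (ellipsoid with half-axes a, b, c)."""
    return 4.0 / 3.0 * np.pi * a_um * b_um * c_um

# Radii of the cluster silhouette measured along its axis (one value per sampling position).
profile = np.array([10, 25, 40, 48, 50, 47, 38, 22, 9], dtype=float)   # micrometres
cluster_vol = cluster_volume_of_revolution(profile, step_um=12.0)
single_cell_vol = cell_volume_ovoid(20.0, 14.0, 14.0)

print(f"estimated cells in cluster: {cluster_vol / single_cell_vol:.0f}")
```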

INVESTIGATION OF A NEW ENVIRONMENTAL CONTROL STRATEGY FOR COMMERCIAL POULTRY HOUSES (François Lachance)
The principal objective of this master's thesis was to develop an objective measure of poultry thermal comfort in a commercial poultry building. From that index, the foundations of a new environmental control strategy are proposed. In partnership with Excel Technologies, this project proposes a mathematical model that can be used to calculate heat and moisture production online in a broiler house. The literature review covers the notion of poultry thermal comfort and explains in detail the effects of various factors on it. Using traditional equipment found in the industry, 40 days of data were recorded in order to validate the model; the data were recorded in a commercial poultry house near Joliette, Québec, with a Momentum PLC. The heat and moisture exchange models can be used to calculate total, latent and sensible heat losses by broiler chickens, and from the data it is possible to study the effect of temperature, relative humidity, air speed and light intensity on total, latent and sensible heat production. The project also proposes a new method for developing a model that can accurately measure the ventilation rate and the equivalent thermal resistance of the different poultry building surfaces. Finally, the foundations of a new control strategy based on the thermal comfort of broilers are proposed in order to improve environmental control inside Eastern Canadian poultry buildings. In further work, the use of artificial intelligence will be studied in the development of this comfort-based environmental control strategy.
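The thesis's own heat and moisture model is not reproduced here; as a hedged illustration of an online heat budget, the sketch below uses a commonly cited CIGR-type relation for broiler total heat production and an assumed latent fraction.

```python
# Not the thesis's model: a hedged sketch of a flock heat budget using a CIGR-type relation.
def total_heat_per_bird_w(body_mass_kg: float) -> float:
    """Total heat production of one broiler (W), assumed CIGR-style: 10.62 * m^0.75."""
    return 10.62 * body_mass_kg ** 0.75

def house_heat_budget(n_birds: int, body_mass_kg: float, latent_fraction: float = 0.4):
    """Split the flock's total heat into sensible and latent parts.
    The 0.4 latent fraction is an illustrative placeholder, not a measured value."""
    total_w = n_birds * total_heat_per_bird_w(body_mass_kg)
    latent_w = latent_fraction * total_w
    sensible_w = total_w - latent_w
    return total_w, sensible_w, latent_w

total, sensible, latent = house_heat_budget(n_birds=20000, body_mass_kg=1.8)
print(f"total {total/1000:.1f} kW, sensible {sensible/1000:.1f} kW, latent {latent/1000:.1f} kW")
```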

Industrial collaboration: Projet S3I (Station d'inspection industrielle intelligente, an intelligent industrial inspection station)
This project, financed in part by the Precarn-CRIM Alliance program, was aimed at perfecting an intelligent visual inspection station for small objects, namely caps and closures. Under the supervision of the industrial partner, I.C. Vision of Montréal, the team was composed of three main partners: I.C. Vision (Serge Lévesque), the CRIM (Langis Gagnon) and the Département de génie de la production automatisée of the École de technologie supérieure (Jacques-André Landry).