Mohamed CHERIET

Eng., Ph.D., SMIEEE


Publications


For more details and an updated list of publications, please visit the Synchromedia website or the Google Scholar website.

April-2016

Consequences of future data centre deployment in Canada on electricity generation and environmental impacts: a 2015–2030 prospective study

Dandres T., Nguyen K-K., Nathan V., Obrekht G., Yves L., Cheriet M., Réjean S., Andy W.
Journal Paper In Review to Elsevier: Pattern Recognition Letters
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

April-2016

A Note on Quality of Experience (QoE) beyond Quality of Service (QoS) as the Baseline

Farrahi Moghaddam R., Cheriet M.
Journal Paper In Review to Information Systems Journal
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

April-2016

Incremental Similarity for real-time on-line incremental learning systems

Režnáková, M., Tencer, L., Cheriet M
Journal Paper Published to Elsevier: Pattern Recognition Letters
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

March-2016

Taxonomy of information security risk assessment (ISRA)

Shameli-Sendi, A., Aghababaei-Barzegar, R., Cheriet M
Journal Paper Published to Elsevier: Computers & Security
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

February-2016

Lexicon reduction of handwritten Arabic subwords based on the prominent shape regions

Davoudi, H., Cheriet M., and Kabir, E.
Journal Paper Published to Springer: International Journal on Document Analysis and Recognition (IJDAR)
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2015

Optimal Countermeasure Selection using Multi-Objective Optimization on Attack-Defence Tree

Shameli-Sendi, A., Louafi, H., He, W., Cheriet M.
Journal Paper Submitted to IEEE Transactions on Dependable and Secure Computing
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2015

FSITM: A Feature Similarity Index For Tone-Mapped Images

H. Ziaei Nafchi, A. Shahkolaei, R. Farrahi Moghaddam, Cheriet M
Journal Paper Published to IEEE Signal Processing Letters
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

November-2015

Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model

Chherawala, Y., Roy, P. P., Cheriet M
Journal Paper Published to IEEE Transactions on Cybernetics
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

November-2015

Influence of Color-to-Gray Conversion on the Performance of Document Image Binarization: Toward a Novel Optimization Problem

Hedjam, R., Nafchi, H. Z., Kalacska, M., Cheriet M
Journal Paper Published to IEEE Signal Processing Society
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

October-2015

Dynamic Bandwidth-Efficient BCube Topologies for Virtualized Data Center Networks

Asghari V., Farrahi Moghaddam R., Farrahi Moghaddam F., Cheriet M
Journal Paper Submitted
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

October-2015

A User-Friendly TOPSIS-based QoE Model for Adapted Content Selection of Slide Documents

Louafi H, Coulombe S, Cheriet M
Journal Paper Submitted to IEEE Transactions on Services Computing
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

June-2015

Multi-Objective Optimization in Dynamic Content Adaptation of Slide Documents

Louafi, H., Coulombe, S., Cheriet M
Journal Paper Published to IEEE Transactions on Services Computing
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

June-2015

SO-ARTIST: Self-Organized ART-2A inspired clustering for online Takagi–Sugeno fuzzy models

Režnáková, M., Tencer, L., Cheriet M
Journal Paper Published to Elsevier: Applied Soft Computing
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

May-2015

Taxonomy of Distributed Denial of Service Mitigation Approaches for Cloud Computing

Shameli-Sendi, A., Pourzandi, M., Fekih-Ahmed, M., Cheriet M
Journal Paper Published to Elsevier: Journal of Network and Computer Applications

Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

April-2015

Tensor representation learning based image patch analysis for text identification and recognition

Guoqiang Z., Cheriet M.
Journal Paper Published to Elsevier: Pattern Recognition

Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

March-2015

Carbon-aware distributed cloud: multi-level grouping genetic algorithm

Moghaddam, F. F., Moghaddam, R. F., Cheriet M
Journal Paper Published to Springer: Cluster Computing

Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

March-2015

Machine Learning and Pattern Recognition Models in Change Detection

Bouchaffra, D., Cheriet M., Jodoin, P.-M., and Beck, D.
Journal Paper Published to Elsevier: Pattern Recognition

Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2015

TITS-FM: Transductive incremental Takagi-Sugeno fuzzy models

Tencer, L., Reznáková, M., Cheriet M
Journal Paper Published to Elsevier: Applied Soft Computing

Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2014

OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers

Subedi, T. N., Nguyen, K. K., Cheriet M
Journal Paper Published to Elsevier: Computer Communications

Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

October-2014

Environment-Aware Virtual Slice Provisioning in Green Cloud Environment

Kim-Khoa Nguyen, Cheriet M and Y. Lemieux
Journal Paper Published to IEEE Transactions on Services Computing
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

October-2014

Ufuzzy: Fuzzy Models with Universum

Tencer L., Reznakova M. and Cheriet M
Journal Paper Submitted to Fuzzy Sets and Systems (Elsevier)
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

October-2014

Summit-Training for Semi-Supervised Classification Tasks

Tencer L., Reznakova M. and Cheriet M
Journal Paper Submitted to Pattern Recognition (Elsevier)
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

May-2014

Phase-based binarization of historical manuscripts: Model and application

Ziaei Nafchi H, Farrahi Moghaddam R, Cheriet M
Journal Paper Published to IEEE Transactions on Image Processing
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

May-2014

Taxonomy of intrusion risk assessment and response system

Shameli-Sendi, A., Cheriet M., Hamou-Lhadj, A.
Journal Paper Published to Elsevier Computers & Security
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

May-2014

Arabic word descriptor for handwritten word indexing and lexicon reduction

Youssouf, C., Cheriet M
Journal Paper Published to Elsevier Pattern Recognition
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

April-2014

Virtual Slice Assignment in Large-scale Cloud Interconnects

Nguyen, K.-K., Cheriet M.
Journal Paper Published to IEEE Internet Computing
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

March-2014

Large Margin Low Rank Tensor Analysis

Guoqiang, Z., Cheriet M
Journal Paper Published to MIT Press: Neural computation
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

October-2013

OPENICRA: Towards a Generic Model for Automatic Deployment and Hosting of Applications in the Cloud

Gadhgadhi R, Kanso A, Khazri S, Cheriet M
Journal Paper Published - International Journal of Cloud Computing and Services Science (IJCLOSER)
Abstract

Cloud Computing offers a distributed computing environment where applications can be deployed and managed. It is characterized by its scalability, elasticity and widespread use. Although the choice of such an environment may seem advantageous enough, several challenges still remain, mainly in terms of the automated deployment process of applications. This paper focuses on the design and the implementation of a new generic model for automatic application deployment, called OpenICRA, to mitigate the effects of barriers to entry, to reduce application development complexity and to simplify the cloud services deployment process. We conducted two case studies to validate our proposed model. Our empirical results demonstrate the effectiveness of OpenICRA in automating and orchestrating the deployment process of different applications without any modification of their source code, and in optimizing their performance in heterogeneous Cloud environments.

Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - CRD-Ericsson

August-2013

Historical document image restoration using multispectral imaging system

Hedjam R, Cheriet M
Journal Paper Published to Pattern Recognition, 46(8), 2297–2312
Abstract
Thousands of valuable historical documents stored on the shelves of national libraries throughout the world are waiting to be scanned in order to facilitate access to the information they contain. The first major problem faced is degradation, which renders the visual quality of the document very poor, and in most cases, difficult to decipher. This work is part of our collaboration with the BAnQ (Bibliothèque et Archives nationales du Québec), which aims to propose a new approach to provide the end user (historians, scholars, researchers, etc.) with an acceptable visualization of these images. To that end, we have adopted a multispectral imaging system capable of producing images under invisible lighting, such as infrared light. In fact, in addition to visible (color) images, the additional information provided by the infrared spectrum as well as the physical properties of the ink (used on these historical documents) will be further incorporated into a mathematical model, transforming the degraded image into a new clean version suitable for visualization. Depending on the degree of degradation, the problem of cleaning them could be resolved by image enhancement and restoration, whereby the degradation could be isolated in the infrared spectrum, and then eliminated in the visible spectrum. The final color image is then reconstructed from the enhanced visible spectra (red, green and blue). The first experimental results are promising, and our aim, in collaboration with the BAnQ, is to give this documentary heritage to the public and build an intelligent engine for accessing the documents.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

March-2013

A learning framework for the optimization and automation of document binarization methods

Cheriet M, Farrahi Moghaddam R, Hedjam R
Journal Paper Published - Computer Vision and Image Understanding, 117(3), 269–280
Abstract
Almost all binarization methods have a few parameters that require setting. However, they do not usually achieve their upper-bound performance unless the parameters are individually set and optimized for each input document image. In this work, a learning framework for the optimization of the binarization methods is introduced, which is designed to determine the optimal parameter values for a document image. The framework, which works with any binarization method, has a standard structure, and performs three main steps: (i) extracts features, (ii) estimates optimal parameters, and (iii) learns the relationship between features and optimal parameters. First, an approach is proposed to generate numerical feature vectors from 2D data. The statistics of various maps are extracted and then combined into a final feature vector, in a nonlinear way. The optimal behavior is learned using support vector regression (SVR). Although the framework works with any binarization method, two methods are considered as typical examples in this work: the grid-based Sauvola method, and Lu’s method, which placed first in the DIBCO’09 contest. The experiments are performed on the DIBCO’09 and H-DIBCO’10 datasets, and combinations of these datasets with promising results.
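The framework's third step, learning the mapping from document features to optimal parameter values, can be sketched as follows. The paper uses support vector regression (SVR) on richer feature vectors; as a dependency-free stand-in, this sketch fits an ordinary least-squares line from a single hypothetical feature (say, average stroke width) to one binarization parameter. All names here are illustrative, not from the paper.

```python
# Minimal stand-in for the learning step: fit a linear model that
# predicts an optimal binarization parameter from a document feature.
# (The paper uses SVR; this least-squares line is a sketch only.)

def fit_line(features, optimal_params):
    """Least-squares fit y = slope * x + intercept on 1-D data."""
    n = len(features)
    mx = sum(features) / n
    my = sum(optimal_params) / n
    slope = sum((x - mx) * (y - my)
                for x, y in zip(features, optimal_params)) \
        / sum((x - mx) ** 2 for x in features)
    return slope, my - slope * mx

def predict_param(model, feature):
    """Estimate the optimal parameter for a new document image."""
    slope, intercept = model
    return slope * feature + intercept
```

Training pairs (feature, optimal parameter) would come from step (ii), where the optimal value is found per training image, for example by maximizing an evaluation measure against ground truth.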
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2013

3D curvilinear structure detection filter via structure-ball analysis

Rivest-Hénault D, Cheriet M
Journal Paper Published - IEEE Transactions on Image Processing, PP(99), 1
Abstract
Curvilinear structure detection filters are crucial building blocks in many medical image processing applications, where they are used to detect important structures, such as blood vessels, airways, and other similar fibrous tissues. Unfortunately, most of these filters are plagued by an implicit single structure direction assumption, which results in a loss of signal around bifurcations. This peculiarity limits the performance of all subsequent processes, such as understanding angiography acquisitions, computing an accurate segmentation or tractography, or automatically classifying image voxels. This paper presents a new 3-D curvilinear structure detection filter based on the analysis of the structure ball, a geometric construction representing second order differences sampled in many directions. The structure ball is defined formally, and its computation on a discrete image is discussed. A contrast invariant diffusion index easing voxel analysis and visualization is also introduced, and different structure ball shape descriptors are proposed. A new curvilinear structure detection filter is defined based on the shape descriptors that best characterize curvilinear structures. The new filter produces a vesselness measure that is robust to the presence of X- and Y-junctions along the structure by going beyond the single direction assumption. At the same time, it stays conceptually simple and deterministic, and allows for an intuitive representation of the structure’s principal directions. Sample results are provided for synthetic images and for two medical imaging modalities.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2013

A Modified GHG Intensity Indicator: Toward a Sustainable Global Economy based on a Carbon Border Tax and Emissions Trading

Farrahi Moghaddam R, Farrahi Moghaddam F, Cheriet M
Journal Paper Published - Energy Policy, PP(99), 363-380
Abstract
It will be difficult to gain the agreement of all the actors on any proposal for climate change management, if universality and fairness are not considered. In this work, a universal measure of emissions to be applied at the international level is proposed, based on a modification of the Greenhouse Gas Intensity (GHG-INT) measure. It is hoped that the generality and low administrative cost of this measure, which we call the Modified Greenhouse Gas Intensity measure (MGHG-INT), will eliminate any need to classify nations. The core of the MGHG-INT is what we call the IHDI-adjusted Gross Domestic Product (IDHIGDP), based on the Inequality-adjusted Human Development Index (IHDI). The IDHIGDP makes it possible to propose universal measures, such as MGHG-INT. We also propose a carbon border tax applicable at national borders, based on MGHG-INT and IDHIGDP. This carbon tax is supported by a proposed global Emissions Trading System (ETS). The proposed carbon tax is analyzed in a short-term scenario, where it is shown that it can result in a significant reduction in global emissions while keeping the economy growing at a positive rate. In addition to annual GHG emissions, cumulative GHG emissions over two decades are considered with almost the same results.
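As a numerical illustration of the core idea, the intensity divides national emissions by an IHDI-adjusted GDP rather than raw GDP, so more equitable development is credited with a lower intensity. The adjustment below, scaling GDP linearly by the IHDI, is an assumed simplification for illustration only, not the paper's exact IDHIGDP definition, and the figures are made up.

```python
def mghg_int(emissions_mt, gdp_busd, ihdi):
    """Modified GHG intensity: emissions per unit of IHDI-adjusted GDP.

    emissions_mt: annual GHG emissions (Mt CO2e)
    gdp_busd:     GDP (billions USD)
    ihdi:         Inequality-adjusted HDI, in (0, 1]

    The linear scaling below is a hypothetical stand-in for the
    paper's IDHIGDP; it is not the published formula.
    """
    idhigdp = gdp_busd * ihdi  # assumed adjustment, illustration only
    return emissions_mt / idhigdp
```

With this stand-in, two countries with identical emissions and GDP but different IHDI values receive different intensities, which is the mechanism that removes the need to classify nations.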
Funding Resources

CANARIE - GreenStar Network

January-2013

Powering Network of Datacenters By Renewable Energy: A Green Testbed

Nguyen K K, Cheriet M, Lemay M, Savoie M, Ho B
Journal Paper Published - IEEE Internet Computing Magazine, 17(1), 40 - 49
Abstract
Today’s information and communications technology (ICT) services emit an increasing amount of greenhouse gases. Carbon footprint models can enable research into ICT energy efficiency and carbon reduction. The GreenStar Network (GSN) testbed is a prototype wide-area network of data centers powered by renewable energy sources. Through their work developing the GSN, the authors have researched fundamental aspects of green ICT such as virtual infrastructure, unified management of compute, network, power, and climate resources, smart power control, and a carbon assessment protocol.
Funding Resources

CANARIE - GreenStar Network

January-2013

Enabling infrastructure as a service (IaaS) on IP networks: from distributed to virtualized control plane

Nguyen K K, Cheriet M,Lemay M
Journal Paper Published - IEEE Communications Magazine, 51(1), 136 - 144
Abstract
Infrastructure as a Service (IaaS) is considered a prominent model for IP based service delivery. As grid and cloud computing have become stringent demands on today's Internet services, IaaS is required for providing services, particularly "private cloud," regardless of physical infrastructure locations. However, enabling IaaS on traditional Internet Service Provider (ISP) network infrastructures is challenging because IaaS requires a high abstraction level of network architectures, protocols, and devices. Network control plane architecture therefore plays an essential role in this transition, particularly with respect to new requirements of scalability, reliability, and flexibility. In this article we review the evolutionary trend of network element control planes from monolithic to distributed architectures according to network growth, and then present a new virtualization oriented architecture that allows infrastructure providers and service providers to achieve service delivery independently and transparently to end users based on virtualized network control planes. As a result, current ISP infrastructures will be able to support new services, such as heavy resource consuming data center applications. We also show how to use network virtualization for providing cloud computing and data center services in a flexible manner on the nationwide CANARIE network infrastructure.
Funding Resources

CANARIE - GreenStar Network

December-2012

Maximum Entropy Gibbs Density Modeling for Pattern Classification

Mezghani N, Mitiche A, Cheriet M
Journal Paper Published - Entropy, 14(12), 2478-2491
Abstract
Recent studies have shown that the Gibbs density function is a good model for visual patterns and that its parameters can be learned from pattern category training data by a gradient algorithm optimizing a constrained entropy criterion. These studies represented each pattern category by a single density. However, the patterns in a category can be so complex as to require a representation spread over several densities to more accurately account for the shape of their distribution in the feature space. The purpose of the present study is to investigate a representation of visual pattern category by several Gibbs densities using a Kohonen neural structure. In this Gibbs density based Kohonen network, which we call a Gibbsian Kohonen network, each node stores the parameters of a Gibbs density. Collectively, these Gibbs densities represent the pattern category. The parameters are learned by a gradient update rule so that the corresponding Gibbs densities maximize entropy subject to reproducing observed feature statistics of the training patterns. We verified the validity of the method and the efficiency of the ensuing Gibbs density pattern representation on a handwritten character recognition application.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2012

Handwritten Digit Recognition by Fourier-Packet Descriptors

Mezghani N, Mitiche A, Cheriet M
Journal Paper Published - Electronic Letters on Computer Vision and Image Analysis, 11(1), 68-76
Abstract
Any statistical pattern recognition system includes a feature extraction component. For character patterns, several feature families have been tested, such as the Fourier-Wavelet Descriptors. We are proposing here a generalization of this family: the Fourier-Packet Descriptors. We have tested a set of 72 of these features on handwritten digits: the error rate was 2.44% with classifier 1NN for 19 features selected from the set and 1.72% with classifier SVM for all the set.
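The 1NN classifier used in the experiments above is simple enough to sketch directly. The Fourier-Packet descriptor extraction itself is not shown, so the feature vectors and labels below are placeholders, not actual descriptors from the paper.

```python
# 1-nearest-neighbour classification over descriptor vectors:
# a query digit receives the label of the closest training sample.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify_1nn(train, query):
    """train: list of (descriptor_vector, label) pairs."""
    return min(train, key=lambda sample: euclidean(sample[0], query))[1]
```

In the reported setup, the same descriptors would feed either this 1NN rule (19 selected features) or an SVM (all 72 features).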
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

November-2012

Convergence of Cloud Computing and Network Virtualization: Towards a Neutral Carbon Network

Lemay M, Nguyen K K, St-Arnaud B, Cheriet M
Journal Paper Published - IEEE Internet Computing Magazine, 16(6), 51- 59
Abstract
Reducing greenhouse gas (GHG) emissions is one of the most challenging research topics in ICT because of people's overwhelming use of electronic devices. Current solutions focus mainly on efficient power consumption at the micro level; few consider large-scale energy-management strategies. The low-carbon, nationwide GreenStar Network in Canada uses network and server virtualization techniques to migrate data center services among network nodes according to renewable energy availability. The network deploys a "follow the sun, follow the wind" optimization policy as a virtual infrastructure-management technique.
Funding Resources

CANARIE - GreenStar Network

September-2012

W-TSV: Weighted topological signature vector for lexicon reduction in handwritten Arabic documents

Chherawala Y, Cheriet M
Journal Paper Published - Pattern Recognition, 45(9), 3277-3287
Abstract
This paper proposes a holistic lexicon-reduction method for ancient and modern handwritten Arabic documents. The word shape is represented by the weighted topological signature vector (W-TSV), which encodes graph data into a low-dimensional vector space. Three directed acyclic graph (DAG) representations are proposed for Arabic word shapes, based on topological and geometrical features. Lexicon reduction is achieved by a nearest neighbors search in the W-TSV space. The proposed framework has been tested on the IFN/ENIT and the Ibn Sina databases, achieving respectively a degree of reduction of 83.5% and 92.9% for an accuracy of reduction of 90%.
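The lexicon-reduction step, a nearest-neighbours search in the W-TSV space, can be sketched as follows. Computing an actual W-TSV from a word-shape DAG is beyond this snippet, so the signature vectors and words below are placeholders.

```python
# Lexicon reduction as a k-nearest-neighbour search: keep only the
# lexicon entries whose signature vectors lie closest to the query
# word shape's vector. Vectors here stand in for real W-TSVs.

def reduce_lexicon(lexicon, query_vec, keep):
    """lexicon: list of (word, signature_vector); returns `keep` words."""
    def sq_dist(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, query_vec))
    ranked = sorted(lexicon, key=lambda entry: sq_dist(entry[1]))
    return [word for word, _ in ranked[:keep]]
```

The degree of reduction reported in the abstract corresponds to how small `keep` can be made while the correct word still survives the cut 90% of the time.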
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2012

Non-Rigid 2D/3D Registration of Coronary Artery Models with Live Fluoroscopy for Guidance of Cardiac Interventions

Rivest-Hénault D, Sundar H, Cheriet M
Journal Paper Published - IEEE Transactions on Medical Imaging, 31(8), 1557-1572
Abstract
A 2D/3D nonrigid registration method is proposed that brings a 3D centerline model of the coronary arteries into correspondence with bi-plane fluoroscopic angiograms. The registered model is overlaid on top of interventional angiograms to provide surgical assistance during image-guided chronic total occlusion procedures, thereby reducing the uncertainty inherent in 2D interventional images. The proposed methodology is divided into two parts: global structural alignment and local nonrigid registration. In both cases, vessel centerlines are automatically extracted from the 2D fluoroscopic images, and serve as the basis for the alignment and registration algorithms. In the first part, an energy minimization method is used to estimate a global affine transformation that aligns the centerline with the angiograms. The performance of nine general purpose optimizers has been assessed for this problem, and detailed results are presented. In the second part, a fully nonrigid registration method is proposed and used to compensate for any local shape discrepancy. This method is based on a variational framework, and uses a simultaneous matching and reconstruction process to compute a nonrigid registration. With a typical run time of less than 3 s, the algorithms are fast enough for interactive applications. Experiments on five different subjects are presented and show promising results.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2012

Real-time knowledge-based processing of images: Application of the online NLPM method to perceptual visual analysis

Farrahi Moghaddam R, Cheriet M
Journal Paper Published - IEEE Transactions on Image Processing, 21(8), 3390-3404
Abstract
Perceptual analysis is an interesting topic in the field of image processing, and can be considered a missing link between image processing and human vision. Of the various forms of perception, one of the most important and best known is shape perception. In this paper, a framework based on the online nonlocal patch means (NLPM) method is developed, which is designed to infer possible perceptual observations of an input image using the knowledge images provided. Thanks to the speed of online NLPM, the proposed method can simulate the transformation of the input image to the final perceptual image in real time. In order to improve the performance of the method, a hidden chain series is considered for the model that delivers faster convergence. The capability of the method is evaluated on several well-known perceptual examples.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

June-2012

AdOtsu: An adaptive and parameterless generalization of Otsu’s method for document image binarization

Farrahi Moghaddam R, Cheriet M
Journal Paper Published - Pattern Recognition, 45(6), 2419-2431
Abstract
Adaptive binarization methods play a central role in document image processing. In this work, an adaptive and parameterless generalization of Otsu's method is presented. The adaptiveness is obtained by combining grid-based modeling and the estimated background map. The parameterless behavior is achieved by automatically estimating the document parameters, such as the average stroke width and the average line height. The proposed method is extended using a multiscale framework, and has been applied on various datasets, including the DIBCO'09 dataset, with promising results.
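For reference, the classic global Otsu method that this work generalizes can be sketched in a few lines. The paper's contribution is to make the threshold adaptive (grid-based, guided by the estimated background map) and parameterless; this single-threshold sketch does not attempt that.

```python
# Classic Otsu thresholding on a 256-bin greylevel histogram:
# choose the threshold that maximizes the between-class variance.

def otsu_threshold(hist):
    """hist: list of 256 pixel counts; returns the threshold index."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]        # background (class 0) pixel count
        sum0 += t * hist[t]  # background intensity sum
        if w0 == 0 or w0 == total:
            continue         # one class empty: variance undefined
        mu0 = sum0 / w0
        mu1 = (grand_sum - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

The adaptive generalization would run a computation of this kind per grid cell, with the cell size and other parameters estimated from the document (average stroke width, average line height) rather than set by hand.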
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

June-2012

A local linear level set method for the binarization of degraded historical document images

Rivest-Hénault D, Farrahi Moghaddam R, Cheriet M
Journal Paper Published - International Journal on Document Analysis and Recognition, 15(2), 101-124
Abstract
Document image binarization is a difficult task, especially for complex document images. Nonuniform background, stains, and variation in the intensity of the printed characters are some examples of challenging document features. In this work, binarization is accomplished by taking advantage of local probabilistic models and of a flexible active contour scheme. More specifically, local linear models are used to estimate both the expected stroke and the background pixel intensities. This information is then used as the main driving force in the propagation of an active contour. In addition, a curvature-based force is used to control the viscosity of the contour and leads to more natural-looking results. The proposed implementation benefits from the level set framework, which is highly successful in other contexts, such as medical image segmentation and road network extraction from satellite images. The validity of the proposed approach is demonstrated on both recent and historical document images of various types and languages. In addition, this method was submitted to the Document Image Binarization Contest (DIBCO’09), at which it placed 3rd.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

March-2012

A Protocol for Quantifying the Carbon Reductions Achieved Through the Provision of Low or Zero Carbon ICT Services

Steenhof P, Weber C, Brooks M, Spence J, Robinson R, Fry B, Simmonds R, Kiddle C, Aikema D, Savoie M, Ho B, Lemay M, Fung J, Rivest-Hénault D, Farrahi Moghaddam R, Cheriet M
Journal Paper Published - Journal of Sustainable Computing: Informatics and Systems, 2(1), 23–32
Abstract
In this article we present a protocol which has been developed for the purpose of providing guidance for estimating the emission reductions that could result from the provision or sourcing of low or zero carbon information and communication technology (ICT) services. This is an increasingly important topic, not only because ICT has growing environmental impacts, but also due to the technical complexities which underlie the delivery of ICT as a service, especially with respect to the growing use of cloud computing and the provision of ICT services over the internet. The protocol can be used both for creating emission reductions for carbon trading, and for the quantification and reporting of related low or zero carbon ICT initiatives within corporate sustainability reports.
Funding Resources

CANARIE - GreenStar Network

January-2012

Environmental-Aware Virtual Data Center Network

Nguyen K K, Cheriet M, Lemay M, Reijs V, Mackarel A, Pastrama A
Journal Paper Published - Journal of Computer Networks, 56(10), 2538-2550
Abstract
Cloud computing services have recently become a ubiquitous service delivery model, covering a wide range of applications from personal file sharing to enterprise data warehousing. Building green data center networks providing cloud computing services is an emerging trend in the Information and Communication Technology (ICT) industry, because of global warming and the potential GHG emissions resulting from cloud services. As one of the first worldwide initiatives to provision ICT services entirely based on renewable energy such as solar, wind and hydroelectricity across Canada and around the world, the GreenStar Network (GSN) was developed to dynamically transport user services to be processed in data centers built in proximity to green energy sources, reducing the Greenhouse Gas (GHG) emissions of ICT equipment. Under the current approach, which focuses mainly on reducing energy consumption at the micro-level through energy efficiency improvements, overall energy consumption will eventually increase due to the growing demand from new services and users, resulting in an increase in GHG emissions. Based on the cooperation between Mantychore FP7 and the GSN, our approach is therefore much broader and more appropriate, because it focuses on GHG emission reductions at the macro-level. This article presents some outcomes of our implementation of such a network model, which spans multiple green nodes in Canada, Europe and the USA. The network provides cloud computing services based on dynamic provisioning of network slices through relocation of virtual data centers.
Funding Resources

CANARIE - GreenStar Network

January-2012

Time-Frequency Distributions based on Compact Support Kernels: Properties and Performance Evaluation

Abed M, Belouchrani A, Cheriet M, Boashash B
Journal Paper Published - IEEE Transactions on Signal Processing, 60(6), 2814-2827
Abstract
This paper presents two new time-frequency distributions (TFDs) based on kernels with compact support (KCS), namely the separable CB (SCB) and the polynomial CB (PCB) TFDs. The implementation of this family of TFDs follows the method developed for the Cheriet-Belouchrani (CB) TFD. The mathematical properties of these three TFDs are analyzed and their performance is compared to the best classical quadratic TFDs using several tests on multicomponent signals with linear and nonlinear frequency modulation (FM) components, including noise effects. Instead of relying solely on visual inspection of the time-frequency domain plots, the comparisons include time-slice plots and the evaluation of the Boashash-Sucic normalized instantaneous resolution performance measure, which allows the optimized TFD to be selected using a specific methodology. In all presented examples, the KCS-TFDs show significant interference rejection, with the component energy concentrated around the respective instantaneous frequency laws, yielding high resolution measure values.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

October-2011

Non-local adaptive tensors: Application to anisotropic diffusion and shock filtering

Doré V, Farrahi Moghaddam R, Cheriet M
Journal Paper Published - Image and Vision Computing, 29(11), 730-743
Abstract
Structure tensors are used in several PDE-based methods to estimate information on the local structure in the image, such as edge orientation. They have become a common tool in many image processing applications. To integrate the local data information, the structure tensor is based on a local regularization of a tensorial product. In this paper, we propose a new regularization model based on the non-local properties of the tensor product. The resulting non-local structure tensor is effective in restoring the non-homogeneity of the local orientation of the structures. It is particularly efficient in texture regions where patches repeat non-locally. The new tensor regularization also offers the advantage of automatically adapting the smoothing parameter to the local structures of the tensor product. Finally, we explain how this new adaptive structure tensor can be plugged into two PDEs: an anisotropic diffusion and a shock filter. Comparisons with other tensor regularization methods and other PDEs demonstrate the clear advantage of using the non-local structure tensor.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

September-2011

Help-Training for Semi-supervised Learning

Adankon M M, Cheriet M
Journal Paper Published - Pattern Recognition, 44(9), 2220-2230
Abstract
Help-training for semi-supervised learning was proposed in our previous work in order to reinforce the self-training strategy by using a generative classifier along with the main discriminative classifier. This paper extends the Help-training method to the least squares support vector machine (LS-SVM), where both labeled and unlabeled data are used for training. Experimental results on both artificial and real problems show its usefulness when compared with other classical semi-supervised methods.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

September-2011

A spatially adaptive statistical method for the binarization of historical manuscripts and degraded document images

Hedjam R, Farrahi Moghaddam R, Cheriet M
Journal Paper Published - Pattern Recognition, 44(9), 2184-2196
Abstract
In this paper, we present an adaptive method for the binarization of historical manuscripts and degraded document images. The proposed approach is based on maximum likelihood (ML) classification and uses a priori information and the spatial relationship on the image domain. In contrast with many conventional methods that use a decision based on thresholding, the proposed method performs a soft decision based on a probabilistic model. The main idea is that, from an initialization map (under-binarization) containing only the darkest part of the text, the method is able to recover the main text in the document image, including low-intensity and weak strokes. To do so, fast and robust local estimation of text and background features is obtained using grid-based modeling and inpainting techniques; then, the ML classification is performed to classify pixels into black and white classes. The advantage of the proposed method is that it preserves weak connections and provides smooth and continuous strokes, thanks to its correlation-based nature. Performance is evaluated both subjectively and objectively against standard databases. The proposed method outperforms the state-of-the-art methods presented in the DIBCO’09 binarization contest, although those other methods provide performance close to it.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2011

Leveraging Green Communications for Carbon Emission Reductions: Techniques, Testbeds and Emerging Carbon Footprint Standards

Despins C, Labeau F, Labelle R, Ngoc T.L, McNeil J, Leon-Garcia A, Cheriet M, Cherkaoui O, Lemieux Y, Lemay M, Thibeault C, Gagnon F, Farrahi Moghaddam R
Journal Paper Published - IEEE Communications Magazine, 49(8), 101-109
Abstract
Green communication systems and, in broader terms, green information and communications technologies have the potential to significantly reduce greenhouse gas emissions worldwide. This article provides an overview of two issues related to achieving the full carbon abatement potential of ICT. First, green communications research challenges are discussed, notably as they pertain to networking issues. Various initiatives regarding green ICT testbeds are presented in the same realm in order to validate the "green performance" and functionality of such greener cyber-infrastructure. Second, this article offers a description of ongoing international efforts to standardize methodologies that accurately quantify the carbon abatement potential of ICTs, an essential tool to ensure the economic viability of green ICT in the low carbon economy and carbon credit marketplace of the 21st century.
Funding Resources

CANARIE - GreenStar Network

April-2011

Category-based Error handling for Handwritten Text Recognition

Quiniou S, Cheriet M, Anquetil E
Journal Paper Published - Intl. Journal on Document Analysis and Recognition
Abstract
In this paper, we present a framework for handling recognition errors in an N-best list of output phrases produced by a handwriting recognition system, with the aim of using the resulting phrases as inputs to a higher-level application. The framework can be decomposed into four main steps: phrase alignment, and the detection, characterization, and correction of word error hypotheses. First, the N-best phrases are aligned to the top-list phrase, and word posterior probabilities are computed and used as confidence indices to detect word error hypotheses on this top-list phrase (by comparison with a learned threshold). Then, the errors are characterized into predefined types, using the word posterior probabilities of the top-list phrase and other features to feed a trained SVM. Finally, the final output phrase is retrieved thanks to a correction step that uses the characterized error hypotheses and a designed word-to-class backoff language model. First experiments were conducted on the ImadocSen-OnDB handwritten sentence database and on the IAM-OnDB handwritten text database, using two recognizers. We present first results from an implementation of the proposed framework for handling recognition errors in transcripts of handwritten phrases provided by recognition systems.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

April-2011

Semisupervised Learning Using Bayesian Interpretation: Application to LS-SVM

Adankon M M, Cheriet M, Biem A
Journal Paper Published - IEEE Transactions on Neural Networks, 22(4), 513-524
Abstract
Bayesian reasoning provides an ideal basis for representing and manipulating uncertain knowledge, with the result that many interesting algorithms in machine learning are based on Bayesian inference. In this paper, we use the Bayesian approach with one and two levels of inference to model the semisupervised learning problem and give its application to the successful kernel classifier support vector machine (SVM) and its variant least-squares SVM (LS-SVM). Taking advantage of Bayesian interpretation of LS-SVM, we develop a semisupervised learning algorithm for Bayesian LS-SVM using our approach based on two levels of inference. Experimental results on both artificial and real pattern recognition problems show the utility of our method.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

February-2011

Unsupervised MRI segmentation of brain tissues using a local linear model and level set

Rivest-Hénault D, Cheriet M
Journal Paper Published - Magnetic Resonance Imaging, 29(2), 243-259
Abstract
Real-world magnetic resonance imaging of the brain is affected by intensity nonuniformity (INU) phenomena, which make it difficult to fully automate the segmentation process. This difficult task is accomplished in this work by using a new method with two original features: (1) each brain tissue class is locally modeled using a local linear region representative, which allows us to account for the INU in an implicit way and to position the region's boundaries more accurately; and (2) the region models are embedded in the level set framework, so that the spatial coherence of the segmentation can be controlled in a natural way. Our new method has been tested on the ground-truthed Internet Brain Segmentation Repository (IBSR) database and gave promising results, with Tanimoto indexes ranging from 0.61 to 0.79 for the classification of the white matter and from 0.72 to 0.84 for the gray matter. To our knowledge, this is the first time a region-based level set model has been used to perform the segmentation of real-world MRI brain scans with convincing results.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

February-2011

Beyond pixels and regions: A nonlocal patch means (NLPM) method for content-level restoration, enhancement, and reconstruction of degraded document images

Farrahi Moghaddam R, Cheriet M
Journal Paper Published - Pattern Recognition, 44(2), 363-374
Abstract
A patch-based non-local restoration and reconstruction method for preprocessing degraded document images is introduced. The method collects relative data from the whole input image, while the image data are first represented by a content-level descriptor based on patches. This patch-equivalent representation of the input image is then corrected based on similar patches identified using a modified genetic algorithm (GA) resulting in a low computational load. The corrected patch-equivalent is then converted to the output restored image. The fact that the method uses the patches at the content level allows it to incorporate high-level restoration in an objective and self-sufficient way. The method has been applied to several degraded document images, including the DIBCO’09 contest dataset with promising results.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

November-2010

Genetic Algorithm based training for Semisupervised SVM

Adankon M M, Cheriet M
Journal Paper Published - Neural Computing and Applications, 19(8), 1197-1206
Abstract
The Support Vector Machine (SVM) is an interesting classifier with excellent power of generalization. In this paper, we consider applying the SVM to semi-supervised learning. We propose using an additional criterion with the standard formulation of the semi-supervised SVM (S3VM) to reinforce classifier regularization. Since we deal with a nonconvex and combinatorial problem, we use a genetic algorithm to optimize the objective function. Furthermore, we design specific genetic operators and certain heuristics in order to improve the optimization task. We tested our algorithm on both artificial and real data and found that it gives promising results in comparison with classical optimization techniques proposed in the literature.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2010

A multi-scale framework for adaptive binarization of degraded document images

Farrahi Moghaddam R, Cheriet M
Journal Paper Published - Pattern Recognition, 43(6), 2186-2198
Abstract
In this work, a multi-scale binarization framework is introduced, which can be used along with any adaptive threshold-based binarization method. This framework is able to improve binarization results and to restore weak connections and strokes, especially in the case of degraded historical documents. This is achieved thanks to the localized nature of the framework in the spatial domain. The framework requires several binarizations on different scales, which is addressed by the introduction of fast grid-based models. This enables us to explore high scales that are usually unreachable by traditional approaches. In order to expand our set of adaptive methods, an adaptive modification of Otsu's method, called AdOtsu, is introduced. In addition, in order to restore document images suffering from bleed-through degradation, we combine the framework with recursive adaptive methods. The framework shows promising performance in subjective and objective evaluations performed on available datasets.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2010

New PCA-based Face Authentication Approach for Smart-Card Implementation

Sehad A, Bessah N, Touari I, Benfattoum Y, Khali H, Cheriet M
Journal Paper Published - International Review on Computers and Software, 5(4), 384-389
Abstract
In this paper, we present a new PCA-based face verification approach for smart-card implementation. Our scheme deals with the reduced storage space of smart cards. The 2D DCT and then the self-eigenface method are applied in the training step, and in the decision step a new similarity index, based on distances weighted by the representation quality of individuals, is used. Experimental results using the AR face database show a better recognition rate compared to popular distances such as the Euclidean, Manhattan, and Mahalanobis distances.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2010

A Variational Approach to Degraded Document Enhancement

Farrahi Moghaddam R, Cheriet M
Journal Paper Published - IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8), 1347-1361
Abstract
The goal of this paper is to correct bleed-through in degraded documents using a variational approach. The variational model is adapted using an estimated background according to the availability of the verso side of the document image. Furthermore, for the latter case, a more advanced model based on a global control, the flow field, is introduced. The solution of each resulting model is obtained using wavelet shrinkage or a time-stepping scheme, depending on the complexity and nonlinearity of the models. When both sides of the document are available, the proposed model uses the reverse diffusion process for the enhancement of double-sided document images. The results of experiments with real and synthesized samples are promising. The proposed model, which is robust with respect to noise and complex background, can also be applied to other fields of image processing.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2009

Model selection for the LS-SVM. Application to handwriting recognition

Adankon M M, Cheriet M
Journal Paper Published - Pattern Recognition, 42(12), 3264-3270
Abstract
The support vector machine (SVM) is a powerful classifier which has been used successfully in many pattern recognition problems. It has also been shown to perform well in the handwriting recognition field. The least squares SVM (LS-SVM), like the SVM, is based on the margin-maximization principle performing structural risk minimization. However, it is easier to train than the SVM, as it requires only the solution to a convex linear problem, and not a quadratic problem as in the SVM. In this paper, we propose to conduct model selection for the LS-SVM using an empirical error criterion. Experiments on handwritten character recognition show the usefulness of this classifier and demonstrate that model selection improves the generalization performance of the LS-SVM.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2009

Correcting Arabic OCR Errors using Improved Topic based language Models

Mamish S, Cheriet M
Journal Paper Published - Int. Journal of Computer Processing of Oriental Languages, 22(4), 321-340
Abstract
The OCR output of scanned document images suffers from recognition errors, especially for languages characterized by particularities and rich morphology, such as Arabic; an effective error correction model is therefore greatly needed. This paper focuses on three aspects of post-processing correction. The first is improving the alignment and error n-gram models by adding correction rules based on character meta-classes rather than on specific characters, which is more suitable for the Arabic language. The second is using the language models to understand and correct Arabic word fragments resulting from agglutinated affixes or isolated letters. The third concerns improving the language models by adding semantic information to the correction process, using bidirectional n-grams, stemming, and stop-word removal, which gives higher weights to n-grams sharing semantic meanings. In addition, we use a topic corpus rather than a global one for a better probability distribution. The proposed model is effective in correcting lexical errors and covers semantic ones, which are not frequently reported by OCRs and are otherwise corrected only after manual proofreading. The proposed method shows an increase in the correction rate of almost 13%, especially for meaningful terms.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2009

New Frontiers in Handwriting Recognition

Cheriet M, Bunke H, Hu J, Kimura F, Suen C Y
Journal Paper Published - Pattern Recognition, 42(12), 3129-3130
Abstract
After more than 20 years of continuous and intensive effort devoted to solving the challenges of handwriting recognition, progress in recent years has been very promising. Those challenges are now considered to constitute a millennium problem (cf. addendum p. xx).
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2009

Handwriting recognition research: Twenty years of achievement...and beyond

Cheriet M, El-Yacoubi M, Fujisawa H, Lopresti D P, Lorette G
Journal Paper Published - Pattern Recognition, 42(12), 3131-3135
Abstract
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2009

RSLDI: Restoration of single-sided low-quality document images

Farrahi Moghaddam R, Cheriet M
Journal Paper Published - Pattern Recognition, 42(12), 3355-3364
Abstract
This paper addresses the problem of enhancing and restoring single-sided low-quality document images. Initially, a series of multi-level classifiers is introduced covering several levels, including the regional and content levels. These classifiers can then be integrated into any enhancement or restoration method to generalize or improve them. Based on these multi-level classifiers, we first propose a novel PDE-based method for the restoration of the degradations in single-sided document images. To reduce the local nature of PDE-based methods, we empower our method with two flow fields to play the role of regional classifiers and help in preserving meaningful pixels. Also, the new method further diffuses the background information by using a content classifier, which provides an efficient and accurate restoration of the degraded backgrounds. The performance of the method is tested on both real samples, from the Google Book Search dataset, UNESCO's Memory of the World Programme, and the Juma Al Majid (Dubai) datasets, and synthesized samples provided by our degradation model. The results are promising. The method-independent nature of the classifiers is illustrated by modifying the ICA method to make it applicable to single-sided documents, and also by providing a Bayesian binarization model.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2009

Semi-Supervised Least Squares Support Vector Machine

Adankon M M, Cheriet M
Journal Paper Published - IEEE Transactions on Neural Networks, 20(12), 1858-1870
Abstract
The least squares support vector machine (LS-SVM), like the SVM, is based on the margin-maximization principle performing structural risk minimization, and has excellent power of generalization. In this paper, we consider its use in semisupervised learning. We propose two algorithms to perform this task, deduced from the transductive SVM idea. Algorithm 1 is based on a combinatorial search guided by certain heuristics, while Algorithm 2 iteratively builds the decision function by adding one unlabeled sample at a time. In terms of complexity, Algorithm 1 is faster, but Algorithm 2 yields a classifier with a better generalization capacity when only a few labeled data are available. Our proposed algorithms are tested on several benchmarks and give encouraging results, confirming our approach.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

December-2009

Image Watermarking Based on the Hessenberg Transform

Seddik H, Sayadi M, Fnaiech F, Cheriet M
Journal Paper Published - Int. Journal on Image Graphics, 9(3), 411-433
Abstract
Watermarking is now considered an efficient means of assuring copyright protection and data owner identification. Watermark embedding techniques depend on the representation domain of the image (spatial, frequency, or multiresolution). Every domain has its specific advantages and limitations. Moreover, each technique in a chosen domain is found to be robust to specific sets of attack types. We therefore need more robust domains to overcome these limitations and respect all the watermarking criteria (capacity, invisibility, and robustness). In this paper, a new watermarking method is presented using a new domain for image representation and watermark embedding: the mathematical Hessenberg transformation. This domain is found to be robust against a wide range of STIRMARK attacks such as JPEG compression, convolution filtering, and noise addition. The robustness of the new technique in preserving and extracting the embedded watermark is demonstrated after various attack types, and is improved compared with other methods in use. In addition, the proposed method is blind: the host image is not needed in the watermark detection process.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

May-2009

Robust NL-means filter with optimal smoothing parameter for statistical image denoising

Doré V, Cheriet M
Journal Paper Published - IEEE Trans. on Signal Processing, 57(5), 1703-1716
Abstract
Most denoising methods require that some smoothing parameters be set manually to optimize their performance. Among these methods, a new filter based on nonlocal weighting (NL-means filter) has been shown to have a very attractive denoising capacity. In this paper, we propose fixing the smoothing parameter of this filter automatically. The smoothing parameter corresponds to the bandwidth h of a local constant regression. We use the Cp statistic embedded in Newton's method to optimize h in a point-wise fashion. This statistic also has the advantage of being a reliable measure of the quality of the denoising process for each pixel. In addition, we introduce a robust regression in the NL-means filter designed to greatly reduce the blur yielded by the weighting. Finally, we show how the automatic denoising model can be extended to images degraded by multiplicative noise. Experiments conducted on images with additive and multiplicative noise demonstrate a high denoising power with a degree of detail preservation...
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

April-2009

Low quality document image modeling and enhancement

Farrahi Moghaddam R, Cheriet M
Journal Paper Published - International Journal of Document Analysis and Recognition, 11(4), 183-201
Abstract
In order to tackle problems such as shadow-through and bleed-through, a novel defect model is developed which generates physically damaged document images. This model addresses physical degradation, such as aging and ink seepage. Based on the diffusive nature of the physical defects, the model is designed using virtual diffusion processes. Then, based on this degradation model, a restoration method is proposed and used to fix the bleed-through effect in double-sided document images using the reverse diffusion process. Subjective and objective evaluations are performed on both the degradation model and the restoration method. The experiments show promising results on both real and generated data.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2009

The Combined Statistical Stepwise and Iterative Neural Network Pruning Algorithm

Fnaiech N, Fnaiech F, Jervis B W, Cheriet M
Journal Paper Published - Intelligent Automation and Soft Computing, 15(4), 573-589
Abstract
In this paper, we present a new pruning algorithm formed by combining the Statistical Stepwise Method (SSM) [1] with the Iterative Pruning (IP) [4] algorithm. This proposed algorithm (SSIP) is used to simultaneously remove unnecessary neurons or weight connections from a given feed-forward neural network (NN) in order to “optimize” its structure. Some modifications to the previous pruning algorithms published in [1] and [4] are also reported. Two versions of the combined SSIP are considered. In the fast version, SSIP1, the modified IP is first applied to the given neural network in order to prune insignificant units, and then the modified SSM is applied to the pruned network to remove unnecessary links. In the second version, SSIP2, the above procedure is applied to each layer in turn, working from the input layer to the output layer. The performance of the algorithms is compared using two real-world applications, brain disease detection and texture classification, and the superiority of the SSIP pruning algorithm is demonstrated. The new algorithm can eliminate approximately 59% of the links in the initial oversized network while improving performance by approximately +39% and +26% for the sensitivity of learning and generalization, respectively.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2009

2D PCA based techniques in DCT domain for face recognition

Bengharabi M, Mezai L, Harizi F, Guessoum A, Cheriet M
Journal Paper Published - Intelligent Systems Technologies and Applications, 7(3), 243-265
Abstract
In this paper, we introduce two-dimensional PCA (2DPCA), diagonal principal component analysis (DiaPCA), and DiaPCA+2DPCA in the DCT domain for face recognition. The 2D discrete cosine transform (2D DCT) is used as a pre-processing step; then 2DPCA, DiaPCA, and DiaPCA+2DPCA are applied to the upper-left corner block of the global 2D DCT transform matrix of the original images. The Olivetti Research Laboratory (ORL) and YALE face databases are used to compare the proposed approach with the conventional one without DCT under four matrix similarity measures: Frobenius, Yang, assembled matrix distance (AMD), and volume measure (VM). The experiments show that, in addition to the significant gain in both training and testing times, the recognition rate using 2DPCA, DiaPCA, and DiaPCA+2DPCA in the DCT domain is generally better than, or at least competitive with, the recognition rates obtained by applying these three 2D appearance-based statistical techniques directly to the raw pixel images, especially under the VM similarity measure.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2008

Bayes Classification of Online Arabic Characters by Gibbs Modeling of Class Conditional Densities

Mezghani N, Mitiche A, Cheriet M
Journal Paper Published - IEEE Trans. Pattern Analysis and Machine Intelligence, 30(7), 1121-1131
Abstract
This study investigates Bayes classification of online Arabic characters using histograms of tangent differences and Gibbs modeling of the class-conditional probability density functions. The parameters of these Gibbs density functions are estimated following the Zhu et al. constrained maximum entropy formalism, originally introduced for image and shape synthesis. We investigate two partition function estimation methods: one uses the training sample, and the other draws from a reference distribution. The efficiency of the corresponding Bayes decision methods, and of a combination of these, is shown in experiments using a database of 9,504 freely written samples by 22 writers. Comparisons to the nearest neighbor rule method and a Kohonen neural network method are provided.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2008

Gray-level Texture Characterization based on a New Adaptive Nonlinear Auto-Regressive Filter

Sayadi M, Sakrane S, Fnaiech F, Cheriet M
Journal Paper Published - Electronic Letters on Computer Vision and Image Analysis, 7(1), 40-53
Abstract
In this paper, we propose a new nonlinear exponential adaptive two-dimensional (2-D) filter for texture characterization. The filter's adaptive coefficients are updated with the Least Mean Square (LMS) algorithm. The proposed nonlinear model is used for texture characterization with a 2-D Auto-Regressive (AR) adaptive model. The main advantage of the new nonlinear exponential adaptive 2-D filter is the reduced number of coefficients used to characterize the nonlinear image, compared with the 2-D second-order Volterra model. Whatever the degree of the nonlinearity, the problem results in the same number of coefficients as in the linear case. The characterization efficiency of the proposed exponential model is compared to that provided by both 2-D linear and Volterra filters and by the co-occurrence matrix method. The comparison is based on two criteria usually used to evaluate the discriminating ability of features and the class quantification in characterization techniques. The first criterion quantifies the classification accuracy based on a weighted Euclidean distance classifier. The second criterion is the characterization degree, based on the ratio of "between-class" variances to "within-class" variances of the estimated coefficients. Extensive experiments showed that the exponential model coefficients give better results in texture discrimination than several other parametric characterization methods, even in a noisy context.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant
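The 2-D adaptive AR filtering described in the abstract above can be illustrated with a minimal LMS sketch. This is a simplified stand-in, not the authors' exponential model: the causal support, learning rate, and use of the coefficients as texture features are hypothetical choices for illustration.

```python
import numpy as np

def lms_2d_ar(image, mu=1e-4):
    """Minimal 2-D auto-regressive prediction with LMS coefficient updates.

    Each pixel is predicted from a small causal neighborhood (pixels already
    seen in raster order); the coefficient vector is updated with the
    standard LMS rule w += mu * e * x. The converged coefficients can serve
    as simple texture features.
    """
    # Hypothetical minimal causal support: left, above, above-left, above-right.
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
    w = np.zeros(len(offsets))            # adaptive AR coefficients
    rows, cols = image.shape
    sq_errors = []
    for i in range(1, rows):
        for j in range(1, cols - 1):
            x = np.array([image[i - di, j - dj] for di, dj in offsets])
            e = image[i, j] - w @ x       # prediction error
            w += mu * e * x               # LMS update
            sq_errors.append(e * e)
    return w, float(np.mean(sq_errors))
```

On a textured patch, the mean squared prediction error and the learned coefficients together characterize the texture; the co-occurrence and Volterra methods the paper compares against would replace this feature-extraction step.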

July-2012

Proceedings, International Conference on Signal Processing, Information Sciences, and their Applications

Cheriet M, Boashash B (eds.)
Book Published - CD, Montreal, Canada
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2008

International Conference on Frontiers on Handwriting Recognition

Suen C Y, Cheriet M
Book Published - CD, Montreal, Canada
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

May-2014

IIGHGINT: A generalization to the modified GHG intensity universal indicator toward a production/consumption insensitive border carbon tax

Authors: Farrahi Moghaddam R., Farrahi Moghaddam F., Cheriet M
Book Chapter In review - Taxes and the Economy: Government Policies, Macroeconomic Factors and Impacts on Consumption and the Environment, NOVA Science
Funding Resources

CANARIE - GreenStar Network

September-2012

Green Communications for Carbon Emission Reductions: Architectures and Standards

Authors: Despins C, Labeau F, Labelle R, Cheriet M, Leon-Garcia A, Cherkaoui O
Editors: J. Wu, S. Rangan, H. Zhang
Book Chapter Published - Green Communications: Theoretical Fundamentals, Algorithms and Applications, NA, CRC Press
Funding Resources

CANARIE - GreenStar Network

April-2012

Resource Discovery and Allocation in Low Carbon Grid Networks

Authors: Nguyen KK, Daouadji A, Cheriet M, Lemay M O
Book Chapter Published - Communication and Networking in Smart Grids, CRC Press
Funding Resources

CANARIE - GreenStar Network

February-2012

A Robust Word Spotting for Historical Arabic Manuscripts

Authors: Cheriet M, Farrahi Moghaddam R
Editors: V. Margner and H. El Abed
Book Chapter Accepted - Guide to OCR for Arabic Scripts, 453-484, Springer-Verlag
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2012

Manifold Learning for shape-based recognition of historical Arabic documents

Authors: Cheriet M, Farrahi Moghaddam R, Arabnejad E, Zhong G
Book Chapter Published - Handbook of Statistics, 1st(31), NA, Elsevier
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2011

Provisioning renewable energy for ICT services in a future internet

Authors: Nguyen KK, Cheriet M, Lemay M, St. Arnaud B, Reijs V, Mackarel A, Minoves P, Pastrama A, Heddeghem WV
Book Chapter Published - FIA Book, Future Internet: Achievements and Promising Technology, 6656, 419-429, Springer Verlag
Funding Resources

CANARIE - GreenStar Network

January-2009

Support Vector Machines

Authors: Adankon M M, Cheriet M
Editors: Stan Z. Li and Anil Jain
Book Chapter Published - Encyclopedia of Biometrics, 1303-1308, Springer US
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2009

Unified Framework for SVM Model Selection

Authors: Adankon M M, Cheriet M
Editors: Isabelle Guyon, Gavin Cawley, Gideon Dror, and Amir Saffari
Book Chapter Published - Hands-on Pattern Recognition: Challenges in Data Representation, Model Selection, and Performance Prediction (1), Microtome Publishing, Pascal Eprints
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

January-2008

Advances in Degradation Modelling and Processing

Authors: Cheriet M, Farrahi Moghaddam R
Book Chapter Published - NA, Microtome Publishing, Pascal Eprints
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2016

Relational Fisher Analysis

Guoqiang Z., Yaxin S. and Cheriet M
Conference Papers Accepted - The annual International Joint Conference on Neural Networks (IJCNN), 24-29 July 2016, Vancouver, Canada
Abstract

Refereed?: Yes

Invited?: Yes

June-2016

Federal Smart House Regulator (FSHR): A Self-Managing and Ecosystemic Approach to Resource Management, Automation, and Sustainability in Smart Houses

Farrahi Moghaddam R., Lemieux Y., Cheriet M
Conference Papers Accepted - SDS society for disability, June 8-11, 2016, Buffalo, NY
Abstract

Refereed?: Yes

Invited?: Yes

October-2015

A Novel Approach to Enabling Sustainable Actions in the Context of Smart House/Smart City Verticals Using Autonomous, Cloud-Enabled Smart Agents

Farrahi Moghaddam R., Lemieux Y., Cheriet M
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
October-2015

Toward an architectural model for highly-dynamic multi-tenant multi-service cloud-oriented platforms

Titous A., Cheriet M, Gherbi A.
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
October-2015

Session-based Communication for Vital Machine-to-Machine Applications

Arsenault M.-O., Garcia Gamardo H., Nguyen K.-K., Cheriet M
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
October-2015

Resource Consumption Assessment for Cloud Middleware

Abdelfattah H., Nguyen K.-K., Cheriet M
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
October-2015

Micro Service Cloud Computing Pattern for Next Generation Networks

Potvin P., Nabaee M., Labeau F., Nguyen K.-K., Cheriet M
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
October-2015

Hyper Heterogeneous Cloud-based IMS Software Architecture: A Proof-of-Concept and Empirical Analysis

Potvin P., Garcia Gamardo H., Nguyen K.-K., Cheriet M
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
October-2015

How to address behavioral issues in the environmental assessment of complex system: a case study in smart building

Walzberg J., Dandres T., Cheriet M, Samson R.
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
October-2015

Applications and challenges of life cycle assessment in the context of a green sustainable telco cloud

Dandres T., Farrahi Moghaddam R., Nguyen K.-K., Cheriet M, Samson R.
Conference Papers Published - EAI International Conference on Smart Sustainable City Technologies (S2CT 2015), October 13-14, 2015, Toronto, Canada.
September-2015

Hierarchical Segmentation and Tracking of Coronary Arteries in 2D X-ray Angiography Sequences

M'Hiri F., Ngan-Le T.-H., Duong L., Desrosiers C., Cheriet M
Conference Papers Published - IEEE International Conference on Image Processing (ICIP 2015), September 27-30, 2015, Québec (QC), Canada
Abstract
Coronary artery (CA) segmentation from an angiographic sequence is essential to guide cardiologists during percutaneous interventions for the treatment and diagnosis of pathologies. Segmentation of the CA from X-ray angiograms is a very challenging problem due to the changes in contrast in the sequence, in addition to the CA's complex topology. In this paper, we propose a hierarchical segmentation method that extends the Vessel Walker model using a temporal prior and multiscale information to extract the CA with a higher level of accuracy. In this method, the vessel located in frame I_t at time t is extracted by utilizing the segmentation result at frame I_{t-1} together with Histogram of Oriented Gradients (HOG) features and a shape matching technique. Our experiments, conducted on five paediatric angiograms, have shown promising qualitative and quantitative results, with a mean Dice coefficient of 64%, a Recall of 53%, and a Precision of 85%.

Refereed?: Yes

Invited?: Yes

August-2015

A defense-centric model for multi-step attack damage cost evaluation

Shameli-Sendi A., Louafi H., He W., Cheriet M
Conference Papers Published - IEEE 3rd International Conference on Future Internet of Things and Cloud (FiCloud 2015), August 24-26, 2015, Rome, Italy
Abstract
Measuring the attack damage cost and monitoring the sequence of privilege escalations play a critical role in choosing the right countermeasure in an Intrusion Response System (IRS). Existing attack damage cost evaluation approaches have limitations, such as neglecting the dependencies between system assets, ignoring the backward damage of exploited non-goal services, or omitting the potential damage toward the goal service. In this paper, we propose a defense-centric model to calculate the damage cost of a multi-step attack. The main advantage of this model is that it provides an accurate damage cost by considering not only the damaged services (non-goal services) but also the potential damage toward the attacker's target (goal service). To track the attacker's progress and find the attack path, an Attack-Defense Tree (ADT) is used. The model has been implemented in, but is not limited to, the cloud environment and tested with a multi-step attack scenario.

Refereed?: Yes

Invited?: Yes

August-2015

MS-Tex: MultiSpectral Text Extraction Contest

Hedjam R., Ziaei Nafchi H., Farrahi Moghaddam R., Kalacska M., Cheriet M
Conference Papers Published - 13th International Conference on Document Analysis and Recognition (ICDAR 2015), August 23-26, 2015, Gammarth, Tunisia.
Abstract
The first competition on MultiSpectral Text Extraction (MS-TEx) from historical document images was organized in conjunction with the ICDAR 2015 conference. The goal of this contest is the evaluation of the most recent advances in text extraction from historical document images captured by a multispectral imaging system. The MS-TEx 2015 dataset contains 10 handwritten and machine-printed historical document images along with eight spectral images for each image. This paper provides a report on the methodology and performance of the five algorithms submitted by various research groups across the world. The objective evaluation and ranking was performed using well-known evaluation metrics of binarization and classification.

Refereed?: Yes

Invited?: Yes

August-2015

A Multiple-Expert Binarization Framework for Multispectral Images

Farrahi Moghaddam R., Cheriet M
Conference Papers Published - 13th International Conference on Document Analysis and Recognition (ICDAR 2015), August 23-26, 2015, Gammarth, Tunisia.
Abstract
In this work, a multiple-expert binarization framework for multispectral images is proposed. The framework is based on a constrained subspace selection limited to the spectral bands combined with state-of-the-art gray-level binarization methods. The framework uses a binarization wrapper to enhance the performance of the gray-level binarization. Nonlinear preprocessing of the individual spectral bands is used to enhance the textual information. An evolutionary optimizer is considered to obtain the optimal and some suboptimal 3-band subspaces from which an ensemble of experts is then formed. The framework is applied to a ground truth multispectral dataset with promising results. In addition, a generalization to the cross-validation approach is developed that not only evaluates generalizability of the framework, it also provides a practical instance of the selected experts that could be then applied to unseen inputs despite the small size of the given ground truth dataset.

Refereed?: Yes

Invited?: Yes

June-2015

Multistage OCDO: Scalable Security Provisioning Optimization in SDN-Based Cloud

Jarraya Y., Shameli-Sendi A., Pourzandi M., Cheriet M
Conference Papers Published - The IEEE 8th International Conference on Cloud Computing (CLOUD 2015), June 27 - July 2, 2015, New York City, NY, USA
Abstract
Cloud computing is increasingly changing the landscape of computing; however, one of the main issues refraining potential customers from adopting the cloud is security. Network functions virtualization together with software-defined networking can be used to efficiently coordinate different network security functions in the network. To get the best out of network capabilities, algorithms are needed for optimal placement of the security functionality in the cloud infrastructure. However, due to the large number of flows to be considered and the complexity of interactions in these networks, classical placement algorithms are not scalable. To address this issue, we elaborate an optimization framework, namely OCDO, that provides adequate and scalable network security provisioning and deployment in the cloud. Our approach is based on an innovative multistage scheme that combines decomposition and segmentation techniques for the security function placement problem while coping with the complexity and scalability of such an optimization problem. We present the results of multiple scenarios to assess the efficiency and adequacy of our framework. We also describe our prototype implementation of the framework, integrated into an open source cloud framework, i.e., OpenStack.

Refereed?: Yes

Invited?: Yes

June-2015

The generation of synthetic handwritten data for improving on-line learning

Reznakova M., Tencer L., Plamondon R., Cheriet M
Conference Papers Published - 17th Conference of the International Graphonomics Society (IGS 2015), June 21-24, 2015, University of the French West Indies, Pointe-à-Pitre, Guadeloupe
Abstract
In this paper, we introduce a framework for on-line learning of handwritten symbols from scratch. Such learning suffers from missing data at the beginning of the learning process, so we propose the use of the Sigma-lognormal model to generate synthetic data. Our framework addresses real-time use of the system, where the recognition of a single symbol cannot be postponed by the generation of synthetic data. We evaluate our framework and the Sigma-lognormal model by comparing the recognition rate against block-learning and against learning without any synthetic data. Experimental results show that both of these contributions enhance on-line handwriting recognition, especially when starting from scratch.

Refereed?: Yes

Invited?: Yes

June-2015

An incremental Approach towards Online Sketch Recognition

Tencer L., Reznakova M., Cheriet M
Conference Papers Published - 17th Conference of the International Graphonomics Society (IGS 2015), June 21-24, 2015, University of the French West Indies, Pointe-à-Pitre, Guadeloupe
Abstract
In this paper, we present a novel method for the recognition of handwritten sketches. Unlike previous approaches, we focus on online retrieval and the ability to build our model incrementally; thus we do not need to know all the data in advance, and we can achieve very good recognition results after as few as 15 samples. The method is composed of two main parts: feature representation, and learning and recognition. In the feature representation part, we utilize SIFT-like feature descriptors in combination with soft-response Bag-of-Words techniques. Descriptors are extracted locally using our novel sketch-specific sampling strategy, and for support regions we follow a patch-based approach. For learning and recognition, we use a novel technique based on fuzzy neural networks, which has shown good performance in incremental learning. Experiments on state-of-the-art benchmarks have shown promising results.

Refereed?: Yes

Invited?: Yes

June-2015

Universum Learning for Semi-Supervised Signature Recognition from Spatio-Temporal Data

Tencer L., Reznakova M., Cheriet M
Conference Papers Published - 17th Conference of the International Graphonomics Society (IGS 2015), June 21-24, 2015, University of the French West Indies, Pointe-à-Pitre, Guadeloupe
Abstract
We present a novel approach to signature recognition from spatio-temporal data. The data are obtained by recording gyroscope and accelerometer measurements from an embedded pen device. The idea of Universum learning was previously presented by Vapnik and recently popularized in the machine learning community. It assumes that the decision boundary of a classifier lies close to data with high uncertainty. The quality of the final classifier strongly depends on how the Universum data are chosen and also on the representation of the original data. In this paper we use a novel approach of Universum learning to classify signature data, and we present a novel idea for sampling the Universum data. Finally, we also find a more effective representation of the signature data itself compared to the baseline method. These three novelties allow us to outperform previously published results by 4.89% / 5.58%.

Refereed?: Yes

Invited?: Yes

May-2015

Side information based Exponential Discriminant Analysis for Face verification in the Wild

Ouamane A., Bengherabi M., Hadid A., Cheriet M
Conference Papers Published - The International Workshop on Biometrics in the Wild (B-Wild 2015), held in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (IEEE FG 2015), May 8, 2015, Ljubljana, Slovenia
Abstract
Recently, extensive research efforts have been devoted to the challenging problem of face verification in unconstrained settings and with weakly labeled data, where the task is to determine whether pairs of images are from the same person or not. In this paper, we propose a novel discriminative dimensionality reduction technique called Side-Information Exponential Discriminant Analysis (SIEDA), which inherits the advantages of both Side-Information Linear Discriminant analysis (SILD) and Exponential Discriminant Analysis (EDA). SIEDA transforms the problem of face verification under weakly labeled data into a generalized eigenvalue problem while alleviating the preprocessing step of PCA dimensionality reduction. To further boost performance, multi-scale variants of binarized statistical image feature histograms are adopted for efficient and rich facial texture representation. Extensive experimental evaluation on the challenging Labeled Faces in the Wild (LFW) benchmark database demonstrates the superiority of SIEDA over SILD. Moreover, the obtained verification accuracy is impressive and compares favorably against the state-of-the-art.

Refereed?: Yes

Invited?: Yes

May-2015

Optimal Placement of Sequentially Ordered Virtual Security Appliances in the Cloud

Shameli-Sendi A., Jarraya Y., Fekih-Ahmed M., Pourzandi M., Talhi C., Cheriet M
Conference Papers Published - The 14th IFIP/IEEE Symposium on Integrated Network and Service Management (IM 2015), May 11-15, 2015, Ottawa, Ontario, Canada.
Abstract
Traditional enterprise network security is based on the deployment of security appliances placed at specific locations, filtering and monitoring the traffic going through them. In this perspective, security appliances are chained in a specific order to perform different security functions on the traffic. In the cloud, the same approach is often adopted using virtual security appliances to protect traffic for different virtual applications, with the challenge of dealing with the flexible and elastic nature of the cloud. In this paper, we investigate the problem of placing virtual security appliances within the data center in order to minimize network latency and computing costs for security functions while maintaining the required sequential order of traversing virtual security appliances. We propose a new algorithm computing the best place to deploy these virtual security appliances in the data center. We further integrated our placement algorithm into an open source cloud framework, i.e., OpenStack, in our test laboratory. The preliminary results show that we place the virtual security appliances in the required sequential order while improving efficiency compared to the current default placement algorithm in OpenStack.

Refereed?: Yes

Invited?: Yes

May-2015

Towards Flexible, Scalable and Autonomic Virtual Tenant Slices

Fekih-Ahmed M., Talhi C., Cheriet M
Conference Papers Published - The 14th IFIP/IEEE Symposium on Integrated Network and Service Management (IM 2015), May 11-15, 2015, Ottawa, Ontario, Canada.
Abstract
Flexible, scalable, and autonomic isolation of multi-tenant virtual networks has long been a goal of the network research and industrial community. For today's Software-Defined Networking (SDN) platforms, meeting cloud tenants' requirements for scalability, elasticity, and transparency is far from straightforward. SDN programmers typically enforce strict and inflexible traffic isolation by resorting to low-level encapsulation mechanisms when reasoning about the behavior of their complex slices. In this paper, we propose SD-NMS, a novel software-defined architecture overcoming the limitations of SDN and encapsulation techniques. SD-NMS lifts several network virtualization roadblocks by combining these two separate approaches into a unified design. The SD-NMS design leverages the benefits of SDN to provide Layer 2 (L2) isolation, coupled with network overlay protocols, through simple and flexible virtual tenant slice abstractions. This yields a network virtualization architecture that is flexible, scalable, and secure on one side, and self-manageable on the other. The experimental results show that the proposed design incurs negligible overhead and guarantees network performance while achieving the desired isolation goals.

Refereed?: Yes

Invited?: Yes

March-2015

SmartPacket: Redistributing the Routing Intelligence among Network Components in SDNs

Farrahi Moghaddam R., Cheriet M
Conference Papers Published - Proceedings of the 2015 IEEE International Conference on Cloud Engineering (IC2E 2015), March 9-13, 2015, Tempe (AZ), USA
Abstract
Redistribution of intelligence and management in software-defined networks (SDNs) is a potential approach to addressing the scalability and integrity bottlenecks of these networks. We propose to revisit the routing concept based on the notion of regions. Using a basic and consistent definition of regions, a region-based packet routing scheme called SmartRegion Routing is presented. The flexibility of regions in terms of naming and addressing is then leveraged in the form of a region stack, among other features, placed in the associated packet header. In this way, most of the complexity and dynamicity of a network is absorbed, making highly fast and simplified routing at the inter-region level, along with semi-autonomous intra-region routing, feasible. In addition, multipath planning can be naturally realized at both the inter- and intra-region levels. A basic form of the SmartRegion routing mechanism is provided. The simplicity, scalability, and manageability of the proposed approach would also bring future potential to reduce the energy consumption and environmental footprint associated with SDNs. Finally, various applications, such as enabling seamless broadband access, providing beyond-IP addressing mechanisms, and address-equivalent naming mechanisms, are considered and discussed.

Refereed?: Yes

Invited?: Yes

October-2014

Multi-Scale Multi-Descriptor Local Binary Features And Exponential Discriminant Analysis For Robust Face Authentication

Ouamane A., Bengherabi M., Guessoum A., Hadid A., Cheriet M
Conference Papers Published - IEEE International Conference on Image Processing, October 27-30, 2014, Paris, France
October-2014

Context-dependent BLSTM Models, Application to Offline Handwriting Recognition

Chherawala Y., Roy P.-P., Cheriet M
Conference Papers Published - 21st IEEE International Conference on Image Processing (ICIP 2014), October 27-30, 2014, Paris, France
September-2014

Deep-Belief-network based Rescoring Approach for Handwritten Word Recognition

Chherawala Y., Roy P.-P., Cheriet M
Conference Papers Published - 14th International Conference on Frontiers in Handwriting Recognition (ICFHR 2014), September 1-4, 2014, Crete Island, Greece.
September-2014

Gabor Filters for Degraded Document Image Binarization

Sehad A., Chibani Y. and Cheriet M
Conference Papers Published - 14th International Conference on Frontiers in Handwriting Recognition (ICFHR 2014), September 1-4, 2014, Crete Island, Greece
September-2014

An active contour based method for image binarization: application to degraded historical document images

Hadjadj Z., Meziane A., Cheriet M and Cherfa Y.
Conference Papers Published - 14th International Conference on Frontiers in Handwriting Recognition (ICFHR 2014), September 1-4, 2014, Crete Island, Greece
August-2014

Constrained Energy Maximization and Self-Referencing Method for Invisible Ink Detection from Multispectral Historical Document Images

Hedjam R., Cheriet M, Kalacska M.
Conference Papers Published - 22nd International Conference on Pattern Recognition (ICPR 2014), August 24-28, 2014, Stockholm, Sweden
August-2014

Framework for monitoring and optimizing environmental impacts in information and communication technology systems

Dandres T., Samson R., Farrahi Moghaddam R., Cheriet M, Lemieux Y.
Conference Papers Published - 2nd International Conference on ICT for Sustainability (ICT4S 2014), August 24-27, 2014, Stockholm, Sweden
August-2014

Minimization of telco cloud emissions based on marginal electricity management

Dandres T., Farrahi Moghaddam R., Nguyen K.-K., Lemieux Y., Cheriet M and Samson R.
Conference Papers Published - 2nd International Conference on ICT for Sustainability (ICT4S 2014), August 24-27, 2014, Stockholm, Sweden
August-2014

Challenges and complexities in application of LCA approaches in the case of ICT for a sustainable future

Farrahi Moghaddam R., Farrahi Moghaddam F., Dandres T., Lemieux Y., Samson R. and Cheriet M
Conference Papers Published - 2nd International Conference on ICT for Sustainability (ICT4S 2014), August 24-27, 2014, Stockholm, Sweden
August-2014

Sustainable Broadband Provisioning in Next-Generation Smart University Campus

Nguyen K.-K., Cheriet M
Conference Papers Published - 2nd International Conference on ICT for Sustainability (ICT4S 2014), August 24-27, 2014, Stockholm, Sweden
August-2014

Life cycle assessment of videoconferencing with call management servers relying on virtualization

Vandromme N., Dandres T., Samson R., Khazri S., Farrahi Moghaddam R., Nguyen K.-K., Cheriet M and Lemieux Y.
Conference Papers Published - 2nd International Conference on ICT for Sustainability (ICT4S 2014), August 24-27, 2014, Stockholm, Sweden
August-2014

Reducing the carbon footprint of a cloud computing service with a predictive dynamic model

Maurice E., Dandres T., Samson R., Farrahi Moghaddam R., Nguyen K.-K., Cheriet M and Lemieux Y.
Conference Papers Published - 2nd International Conference on ICT for Sustainability (ICT4S 2014), August 24-27, 2014, Stockholm, Sweden
May-2014

Carbon footprint reduction of a cloud computing service using a predictive dynamic LCA model

Maurice E., Dandres T., Farrahi Moghaddam R., Nguyen K., Lemieux Y., Cheriet M, Samson R.
Conference Papers Published - Society and Materials International Conference (SAM8), May 21-22, 2014, Liège, Belgium
May-2014

Partial Energy Model with Consequential Life Cycle Analysis in order to evaluate the long-term Impacts of the implantation of Data-centers in Canada

Vandromme N., Dandres T., Obrekht G., Wong A., Farrahi Moghaddam R., Nguyen K.-K., Lemieux Y., Cheriet M, Samson R.
Conference Papers Published - Society and Materials International Conference (SAM8), May 21-22, 2014, Liège, Belgium
March-2014

A Risk Assessment Model

Shameli-Sendi A. and Cheriet M
Conference Papers Published - IEEE International Conference on Cloud Engineering (IC2E 2014), March 10-14, 2014, Boston (MA), USA
March-2014

Software-Defined Scalable and Autonomous Architecture for Multi-tenancy

Fekih Ahmed M., Pourzandi M., Talhi C. and Cheriet M
Conference Papers Published - IEEE International Conference on Cloud Engineering (IC2E 2014), March 10-14, 2014, Boston (MA), USA
March-2014

Binarization of degraded historical document images

Hadjadj Z., Cheriet M and Meziane A.
Conference Papers Published - International Conference on Artificial Intelligence and Information Technology (ICAIIT 2014), March 10-12, 2014, Ouargla, Algeria.
November-2013

Visual Language Processing (VLP) of Ancient Manuscripts: Converting Collections to Windows on the Past

Cheriet M, Farrahi Moghaddam R, Hedjam R
Conference Papers Published - The 7th IEEE GCC Conference and Exhibition (GCC), Doha, Qatar, November 18, 2013
Abstract
Ancient manuscripts constitute a primary carrier of cultural heritage globally, and they are currently being intensively digitized all over the world to ensure their preservation and, ultimately, the wide accessibility of their content. Critical to this research process are the legibility of the documents in image form and access to live texts. Several state-of-the-art methods and approaches have been proposed and developed to address the challenges associated with processing these manuscripts. However, the huge amount of data involved, together with the high cost and scarcity of human expert feedback and reference data, calls for the development of fundamental approaches that encompass all these aspects in an objective and tractable manner. In this paper, we propose one such approach: a novel framework for the computational pattern analysis of ancient manuscripts that is data-driven, multilevel, self-sustaining, and learning-based, and takes advantage of the large quantities of unprocessed data available. Unlike many approaches, which fast-forward to the processing and analysis of feature vectors, our framework represents a new perspective on the task, starting from ground zero of the problem: the definition of objects. In addition, it leverages data-driven mining of relations among objects to discover hidden but persistent links between them. The problem is addressed at three main levels. At the lowest level, that of images, it tackles automatic, data-driven enhancement and restoration of document images using spatial, spectral, sparse, and graph-based representations of visual objects. At the second level, transliteration, directed graphical models, HMMs, undirected random fields, and spatial relation models are used to extract the live text of manuscript images, which reduces dependency on human experts.
Finally, the highest level, network analysis of the relations among objects (from patches and words to manuscripts and writers), involves the search for 'social networks' linking manuscripts. Considering this approach under the umbrella of Visual Language Processing (VLP), we hope that it will be further enriched by the research community in the form of new insights and approaches contributed at the various levels.

Refereed?: Yes

Invited?: Yes

August-2013

Adaptive error-correcting output codes

Guoqiang Zhong, Cheriet Mohamed
Conference Papers Published - Proceedings of IJCAI 2013, Beijing, China, 2013-08-03
Abstract
Error-correcting output codes (ECOC) are a successful technique for combining a set of binary classifiers for multi-class learning problems. However, in the traditional ECOC framework, all the base classifiers are trained independently according to the defined ECOC matrix. In this paper, we reformulate ECOC models from the perspective of multi-task learning, where the binary classifiers are learned in a common subspace of the data. This novel model can be considered an adaptive generalization of the traditional ECOC framework. It simultaneously optimizes the representation of the data as well as the binary classifiers. More importantly, it builds a bridge between the ECOC framework and multi-task learning for multi-class learning problems. To deal with complex data, we also present a kernel extension of the proposed model. An extensive empirical study on 14 data sets from the UCI machine learning repository and the USPS handwritten digits recognition application demonstrates the effectiveness and efficiency of our model.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2013

Unsupervised ensemble of experts (EoE) framework for automatic binarization of document images

Farrahi Moghaddam R, Farrahi Moghaddam F, Cheriet M
Conference Paper Published - Proceedings of the 12th International Conference on Document Analysis and Recognition (ICDAR 2013), Washington DC, USA, 2013-08-25
Abstract
In recent years, a large number of binarization methods have been developed, with varying performance, generalization, and strength on different benchmarks. In this work, to leverage these methods, an ensemble of experts (EoE) framework is introduced to efficiently combine the outputs of various methods. The proposed framework offers a new selection process for the binarization methods, which are in effect the experts in the ensemble, by introducing three concepts: confidentness, endorsement, and schools of experts. The framework, which is highly objective, is built on two general principles: (i) consolidation of saturated opinions and (ii) identification of schools of experts. After building the endorsement graph of the ensemble for an input document image based on the confidentness of the experts, the saturated opinions are consolidated, and the schools of experts are then identified by thresholding the consolidated endorsement graph. A variation of the framework, in which no selection is made, is also introduced; it combines the outputs of all experts using endorsement-dependent weights. The EoE framework is evaluated on the set of methods that participated in the H-DIBCO'12 contest, and also on an ensemble generated from various instances of the grid-based Sauvola method, with promising performance.
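The no-selection variant described above can be illustrated with a much-simplified sketch, in which each expert's weight is its mean agreement with the other experts. This stands in for the paper's endorsement-based weights; the binary maps below are illustrative:

```python
# Simplified sketch of weighted expert combination: each expert's output
# is weighted by how strongly the others "endorse" it (here, mean
# pixelwise agreement), and the final binarization is a weighted vote.
# This illustrates the idea only, not the full EoE framework.

def agreement(a, b):
    """Fraction of pixels on which two binary maps agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def combine(experts):
    """Weighted-vote combination of expert binarizations (flat 0/1 lists)."""
    n = len(experts)
    # Endorsement weight: mean agreement with the other experts.
    weights = [
        sum(agreement(e, o) for j, o in enumerate(experts) if j != i) / (n - 1)
        for i, e in enumerate(experts)
    ]
    total = sum(weights)
    out = []
    for px in zip(*experts):
        vote = sum(w * p for w, p in zip(weights, px)) / total
        out.append(1 if vote >= 0.5 else 0)
    return out

experts = [
    [1, 1, 0, 0, 1],   # expert close to the consensus
    [1, 1, 0, 0, 0],
    [1, 0, 1, 0, 1],   # outlier expert receives a lower weight
]
print(combine(experts))  # -> [1, 1, 0, 0, 1]
```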
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2013

An empirical evaluation of supervised dimensionality reduction for recognition

Guoqiang Zhong, Chherawala Y, Cheriet M
Conference Paper Published - The 12th International Conference on Document Analysis and Recognition (ICDAR 2013), Washington DC, USA, 2013-08-25
Abstract
In the literature, many dimensionality reduction methods have been proposed and applied to recognition tasks, including handwritten digit recognition, character recognition, and string recognition. However, it is usually difficult for researchers to decide which method is the optimal choice for the problem at hand. In this paper, we empirically compare several supervised dimensionality reduction methods on handwritten digit recognition, English letter recognition, and ancient Arabic subword recognition, to evaluate their performance on these recognition tasks. The compared methods include a traditional linear dimensionality reduction approach (linear discriminant analysis, LDA), a locality-based manifold learning approach (marginal Fisher analysis, MFA), and a relational learning approach (probabilistic relational principal component analysis, PRPCA). Experimental results and statistical tests show that the locality-based manifold learning approach (MFA) generally performs well in terms of recognition accuracy but has high computational complexity; the traditional linear approach (LDA) is efficient but does not necessarily deliver the best result; and the relational learning approach (PRPCA) is promising, so more effort should be dedicated to this area.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2013

An Efficient Ground Truthing Tool for Binarization of Historical Manuscripts

Nafchi HZ, Ayatollahi SM, Moghaddam RF, Cheriet M
Conference Paper Published - The 12th International Conference on Document Analysis and Recognition (ICDAR 2013), Washington DC, USA, 2013-08-26
Abstract
For the purpose of facilitating benchmark contributions for binarization methods, a new fast ground-truthing approach, called PhaseGT, is proposed. This approach is used to build the first ground-truthed Persian Heritage Image Binarization Dataset (PHIBD 2012). PhaseGT is a semiautomatic approach to the ground truthing of images in any language, especially designed for historical document images. Its main goal is to accelerate the ground-truthing process and reduce the manual effort involved. It uses phase congruency features to preprocess the input image and to provide a more accurate initial binarization to the human expert who performs the manual part. This preprocessing is in turn based on a priori knowledge provided by the human user. The PHIBD 2012 dataset contains 15 historical document images with their corresponding ground-truth binary images. The historical images in the dataset suffer from various types of degradation. The dataset has also been divided into training and testing subsets for binarization methods that use learning approaches.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2013

Feature design for offline Arabic handwriting recognition: handcrafted vs automated

Chherawala Y, Roy PP, Cheriet M
Conference Paper Published - Proceedings of the 12th International Conference on Document Analysis and Recognition (ICDAR 2013), Washington DC, USA, 2013-08-24
Abstract
In handwriting recognition, the design of relevant features is an important but daunting task. On one hand, handcrafting features is difficult, as it depends on expert knowledge and heuristics. On the other hand, biologically inspired neural networks are able to learn features automatically from the input image, but require a good underlying model. The goal of this paper is to evaluate the performance of automatically learned features compared to handcrafted ones, as they provide a promising alternative to the difficult task of feature handcrafting. In this work, the recognition model is based on the long short-term memory (LSTM) and connectionist temporal classification (CTC) neural networks. This model has been shown to outperform the well-known HMM model on various handwriting tasks, thanks to its reliable probabilistic modeling. In its multidimensional form, called MDLSTM, this network is able to learn features automatically from the input image. For evaluation, we compare the MDLSTM learned features with four state-of-the-art handcrafted features. The IFN/ENIT database has been used as the benchmark for Arabic word recognition, and the results are promising.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2013

Sketch-based Retrieval of Document Illustrations and Regions of Interest

Tencer L, Reznakova M, Cheriet M
Conference Paper Published - Proceedings of the 12th International Conference on Document Analysis and Recognition (ICDAR 2013), Washington DC, USA, 2013-08-26
Abstract
In this paper we present a novel approach to the retrieval of documents containing pictorial data. Many prior works have focused on word spotting and text-based retrieval, but none of these techniques handles the retrieval of the pictorial parts of documents. We present a new method that allows users to retrieve any visual data from documents based on a sketched example. It addresses the three main aspects of a visual retrieval system: feature representation, indexing, and retrieval. In particular, we focus on the design of salient descriptors capable of capturing a unique mapping from sketched images to document illustrations. We evaluate several approaches to feature representation and indexing, with the aim of maximizing the performance of our method. The proposed technique is highly useful as a complement to word spotting when the indexed documents are composed of a mixture of visual and textual data. It has shown promising results, both on pictorial data automatically extracted from documents and on regions of interest selected by users.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2013

ARTIST: ART-2A driven Generation of Fuzzy Rules for Online Handwritten Gesture Recognition

Reznakova M, Tencer L, Cheriet M
Conference Paper Published - Proceedings of the 12th International Conference on Document Analysis and Recognition (ICDAR 2013), Washington DC, USA, 2013-08-26
Abstract
Incremental learning, especially when learning from scratch, is of great interest for online gesture recognition. However, the scarcity of learning exemplars, combined with the requirement of low computational cost, calls for robust and efficient learning machines. In this paper we introduce a hybrid model combining an ART-2A neural network with a Takagi-Sugeno (TS) neuro-fuzzy network. The model is applied to online handwritten gesture recognition, where learning starts from scratch and no class information, such as gesture type or number of classes, is predefined. Moreover, using the ART-2A neural network and our novel distance measure, the computational complexity of the whole model decreases while high accuracy is preserved. Furthermore, we address the forgetting dilemma of online learning by introducing a competitive recursive least squares method for TS models. Together, these components have shown promising results.
Funding Resources

Social Sciences and Humanities Research Council of Canada - IOW

November-2012

Historical Document Binarization Based on Phase Information of Images

Nafchi HZ, Farrahi Moghaddam R, Cheriet M
Conference Paper Published - Proceedings, 1-12, The 11th Asian Conference on Computer Vision, Daejeon, South Korea, 2012-11-05
Abstract
In this paper, phase congruency features are used to develop a binarization method for degraded documents and manuscripts. Gaussian and median filtering are also used to improve the final binarized output: the Gaussian filter further enhances the output, and the median filter removes noise. To detect bleed-through degradation, a feature map based on regional minima is proposed and used. The proposed binarization method provides output binary images with high recall values and competitive precision values. Promising experimental results were obtained on the DIBCO'09, H-DIBCO'10 and DIBCO'11 datasets, demonstrating the robustness of the proposed method against a large number of different types of degradation.
Funding Resources

Social Sciences and Humanities Research Council of Canada - IOW

November-2012

Cognitive Behavior Analysis framework for Fault Prediction in Cloud Computing

Farrahi Moghaddam R, Farrahi Moghaddam F, Asghari V, Cheriet M
Conference Paper Published - Proceedings, 1-8, The 3rd International Conference on the Network of the Future (NoF 2012), Tunis, Tunisia, 2012-11-21
Abstract
Complex computing systems, including clusters, grids, clouds, and skies, are becoming the fundamental tools of the green and sustainable ecosystems of the future. However, they can also pose critical bottlenecks and ignite disasters. Their complexity and high number of variables can easily exceed the capacity of any analyst or traditional operational research paradigm. In this work, we introduce a multi-paradigm, multi-layer, and multi-level behavior analysis framework that can adapt to the behavior of a target complex system. It not only learns and detects normal and abnormal behaviors; it can also suggest cognitive responses in order to increase the system's resilience and grade. The multi-paradigm nature of the framework provides robust redundancy, as the paradigms cross-cover each other's possible hidden aspects. After providing the high-level design of the framework, three paradigms are discussed: Probabilistic Behavior Analysis, Simulated Probabilistic Behavior Analysis, and Behavior-Time Profile Modeling and Analysis. Because of space limitations, we focus on fault prediction as a specific event-based abnormal behavior, considering both spontaneous and gradual failure events. The promising potential of the framework is demonstrated using simple examples and topologies. The framework can provide an intelligent approach to balancing the green and high-probability-of-completion (or high-probability-of-availability) aspects of computing systems.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - CRD-Ericsson

August-2012

New Measure for 2D PCA-based Face Verification

Sehad A, Chibane Y, Cheriet M
Conference Paper Published - Proceedings, 2012 International Conference on Advances in Computing, Communications and Informatics (ICACCI-2012), Chennai, India, 2012-08-03
Abstract
Two-dimensional principal component analysis (2DPCA) operates on 2D images directly, rather than on the 1D vectorized images used by PCA, a classical feature extraction technique in face recognition. Many 2DPCA-based face recognition approaches pay a great deal of attention to feature extraction, but fail to pay the necessary attention to the classification measure. The typical classification measure in 2DPCA-based face recognition is the sum of the Euclidean distances between pairs of feature vectors in a feature matrix, called the distance measure (DM). However, this measure is not compatible with high-dimensional geometry theory. Therefore, a new classification measure, compatible with high-dimensional geometry theory and based on matrix volume, is developed for 2DPCA-based face recognition. To assess the performance of 2DPCA with the volume measure (VM), experiments were performed on two well-known face databases, Yale and FERET; the experimental results indicate that the proposed 2DPCA + VM can outperform the typical 2DPCA + DM and PCA in face recognition.
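As a rough illustration of the volume idea, one common definition of the volume of a tall matrix A is sqrt(det(AᵀA)). The sketch below applies this to the difference of two feature matrices; the paper's exact formulation of the VM may differ:

```python
# Sketch of a volume-based measure between two feature matrices, assuming
# the generalized matrix volume vol(A) = sqrt(det(A^T A)) for a tall,
# full-column-rank matrix A. Illustrative only.
import numpy as np

def matrix_volume(A):
    """Generalized volume of an m x n matrix (m >= n): sqrt(det(A^T A))."""
    return float(np.sqrt(np.linalg.det(A.T @ A)))

def volume_distance(F1, F2):
    """Volume of the difference between two feature matrices."""
    return matrix_volume(F1 - F2)

F1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
F2 = np.zeros((3, 2))
print(volume_distance(F1, F2))  # sqrt(det([[2,1],[1,2]])) = sqrt(3)
```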
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

A New Adaptive Framework for Tubular Structures Segmentation in X-ray Angiography

Mhiri F, Duong L, Cheriet M
Conference Paper Published - Proceedings, 496-500, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, Canada, 2012-07-02
Abstract
Segmentation of tubular structures in X-ray angiographies, such as the aorta or the coronary arteries, is a critical task for guiding the heart surgeon during percutaneous cardiac interventions. Extracting these structures is challenging due to the quality of X-ray angiographies (presence of noise, non-homogeneous regions) and to the characteristics of the structures of interest (tubular, fine structures). To overcome the shortcomings of conventional local and global methods, we propose a coarse-to-fine segmentation framework to extract tubular structures from X-ray angiographies. The framework first enhances tubular structures with a vesselness filter. The structures are then segmented by the proposed adaptive active contour method, which combines a local and a global fitting energy. These two forces are weighted according to the image's homogeneity value. Experiments have been conducted on different angiograms acquired from children. The results show that the proposed approach gives promising results in the segmentation of tubular structures and outperforms other active contour methods. Thanks to the combination of local and global forces, the proposed system is robust to noise and intensity inhomogeneity.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

Online Handwritten Gesture Recognition based on Takagi-Sugeno Fuzzy Models

Reznakova M, Tencer L, Cheriet M
Conference Paper Published - Proceedings, 1247-1252, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, 2012-07-02
Abstract
In this paper, we present a new method for incremental online handwritten gesture recognition based on fuzzy rules. The approach allows starting from scratch, with no previously learned classes, and adding new ones throughout its lifetime. Unlike methods based on evolving mountain clustering, our approach better suits the incremental setting. We introduce a new method for evolving clustering and the use of an incremental density measurement to determine the membership function, which significantly improves the results. Using density measurement as the membership function requires only a few parameters instead of costly covariance matrices, and it does not require estimation by averaging, thus preventing information loss. We also introduce a new set of features based on the shape of gestures. The combination of these new system characteristics lowers memory and computational requirements while significantly increasing the recognition rate.
Funding Resources

Social Sciences and Humanities Research Council of Canada - IOW

July-2012

Automated Intrusion Attack with Permanent Control: Analysis and Countermeasures

Gadhgadhi R, Nguyen KK, Cheriet M
Conference Paper Published - Proceedings, 1440-1441, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, Canada, 2012-07-02
Abstract
We investigate an intrusion attack, called automated intrusion, which retains control of the victim system for a long time. This attack combines intrusion techniques, encoding schemes, and social engineering methods to enhance hacking capability. Experiments in local and wide-area networks show how this type of attack can penetrate a computer system and take control over multiple OSes. We then propose a few countermeasures to protect networks and keep users informed of the attacks.
Funding Resources

CANARIE - GreenStar Network

July-2012

A New Framework for Online Sketch-Based Image Retrieval in a Web Environment

Tencer L, Reznakova M, Cheriet M
Conference Paper Published - Proceedings, 1430-1431, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, 2012-07-02
Abstract
We present a novel framework for the retrieval of images based on user sketches drawn in a web environment. Unlike previous approaches, our method is capable of online retrieval and provides a balanced trade-off between computational cost and robustness, while still preserving local properties. Our approach introduces a novel combination of features describing the properties of the desired results and the query images. Based on the extracted features, we use a nearest neighbor recognizer, trained on dynamic neighborhoods, in combination with k-means and a k-d tree for further speedup. A novel technique for online retrieval, based on sequential input processing and partial hierarchical score evaluation, is introduced, allowing us to suggest entries on the fly based on the in-progress sketch. We tested our method on small and large scale databases and achieved promising results. The solution itself is implemented in a collaborative environment, which allows users to produce queries cooperatively.
Funding Resources

Social Sciences and Humanities Research Council of Canada - IOW

July-2012

Image Patches Analysis for Text Block Identification

Zhong G, Cheriet M
Conference Paper Published - Proceedings, 1241-1246, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, 2012-07-02
Abstract
In this paper, we propose a novel text block identification method for ancient document understanding. Unlike traditional top-down and bottom-up approaches, our method is based on supervised learning on patches of document images; it can be considered an intermediate-level method that integrates essential advantages of both the top-down and bottom-up strategies. In our method, the document images are first partitioned into small patches, and positive and negative patches are then selected to form an active training set. Gabor features are extracted from each patch, and multi-linear discriminant analysis (MDA) is employed to reduce the dimensionality of the data. To deal with unseen documents, a random forest classifier is learned on the new representations of the patches. Compared to traditional approaches, our method not only captures the local texture features of each patch, but also preserves the global information of the training images. Furthermore, MDA is guaranteed to learn a low-dimensional tensor subspace, which largely avoids the curse-of-dimensionality dilemma. Moreover, the random forest classifier can automatically select useful features and deliver satisfactory identification results. Extensive experiments on several scripts of ancient document images demonstrate the effectiveness of our method.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

Sparse Descriptor for Lexicon Reduction in Handwritten Arabic Documents

Chherawala Y, Wisnovsky R, Cheriet M
Conference Paper Published - Proceedings, 3729-3732, 21st International Conference on Pattern Recognition (ICPR 2012), Tsukuba, Japan, 2013-07-03
Abstract
Arabic words have a rich structure. They are made of subwords (groups of connected letters) and diacritical marks (dots). This paper proposes a sparse descriptor specifically designed for lexicon reduction in handwritten Arabic documents. The topological and geometrical features of subwords are extracted from the skeleton image, based on the concept of local density. The sparse descriptor is then formed as a 3-bin histogram describing the distribution of the skeleton pixels' local density (low, medium, or high). This descriptor is then extended to the Arabic word descriptor (AWD), which combines information from all the subwords and diacritics of an Arabic word. The approach is easy to implement and has only one free parameter. It has been evaluated on the Ibn Sina and IFN/ENIT databases with promising results.
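The 3-bin histogram step can be sketched as follows. The density values and quantization thresholds here are hypothetical; the paper defines local density on the skeleton image itself:

```python
# Minimal sketch of the 3-bin descriptor idea: each skeleton pixel's local
# density is quantized as low, medium, or high, and the descriptor is the
# normalized histogram of these labels.
LOW, HIGH = 0.33, 0.66  # hypothetical quantization thresholds

def sparse_descriptor(densities):
    """3-bin normalized histogram of skeleton-pixel local densities."""
    bins = [0, 0, 0]
    for d in densities:
        if d < LOW:
            bins[0] += 1
        elif d < HIGH:
            bins[1] += 1
        else:
            bins[2] += 1
    n = len(densities)
    return [b / n for b in bins]

print(sparse_descriptor([0.1, 0.2, 0.5, 0.7, 0.9]))  # -> [0.4, 0.2, 0.4]
```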
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

A Prototype System for Handwritten Sub-Word Recognition: Toward Arabic-Manuscript Transliteration

Farrahi Moghaddam R, Cheriet M, Milo T, Wisnovsky R
Conference Paper Published - Proceedings, 1198-1204, 11th International Conference on Information Sciences, Signal Processing and their Applications (ISSPA2012), Montreal, 2012-07-02
Abstract
A prototype system for the transliteration of diacritic-less Arabic manuscripts at the subword, or part-of-Arabic-word (PAW), level is developed. The system is able to read subwords of the input manuscript using a set of skeleton-based features. A variation of the system is also developed that reads archigraphemic Arabic manuscripts, which are dot-less, into archigrapheme transliterations. In order to reduce the complexity of the original, highly multiclass problem of subword recognition, it is redefined as a set of binary descriptor classifiers. The outputs of the trained binary classifiers are combined to generate the sequence of subword letters. SVMs are used to learn the binary classifiers. Two specific Arabic databases have been developed to train and test the system; one of them is a database of the Naskh style. The initial results are promising, and the system could be trained on other scripts found in Arabic manuscripts.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

Incorporating User Specific Normalization in Multimodal Biometric Fusion System

Bengherabi M, Harizi F, Cheriet M
Conference Paper Published - Proceedings, 466-471, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, 2012-07-02
Abstract
The aim of this paper is to investigate a user-specific two-level fusion strategy in the context of multimodal biometrics. In this strategy, a client-specific score normalization procedure is first applied to each of the system outputs to be fused; the resulting normalized outputs are then fed into a common classifier. Logistic regression, a non-confidence weighted sum, and a likelihood ratio based on a Gaussian mixture model are used as back-end classifiers. Three client-specific score normalization procedures are considered, namely Z-norm, F-norm, and the Model-Specific Log-Likelihood Ratio (MSLLR) norm. Our first finding, based on 15 fusion experiments on the XM2VTS score database, is that when this two-level fusion strategy is applied, the resulting fusion classifier significantly outperforms the baseline classifiers, and a relative reduction of more than 50% in the equal error rate can be achieved. The second finding is that with this two-level user-specific fusion strategy, the design of the final classifier is simplified, but the performance generalization of the baseline classifiers is not straightforward: great attention must be paid to the choice of the normalization/back-end classifier combination.
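The two-level idea can be illustrated with Z-norm followed by a simple sum fusion. The plain sum stands in for the trained back-end classifier, and all statistics below are hypothetical:

```python
# Sketch of two-level fusion: each modality's raw score is first given a
# client-specific Z-norm (centered and scaled by impostor statistics),
# then the normalized scores are combined (a plain sum stands in for the
# trained back-end classifier). All numbers are illustrative.

def z_norm(score, mean, std):
    """Client-specific Z-norm using impostor mean and standard deviation."""
    return (score - mean) / std

def fuse(scores, stats):
    """Normalize each modality's score, then combine with a simple sum."""
    return sum(z_norm(s, m, sd) for s, (m, sd) in zip(scores, stats))

# Hypothetical impostor (mean, std) per modality for one client.
stats = [(0.2, 0.1), (10.0, 4.0)]
print(round(fuse([0.5, 18.0], stats), 6))  # -> 5.0
```

The normalization puts scores on otherwise incomparable scales (here, one modality around 0.2, the other around 10) into a common client-relative scale before fusion.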
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

A New Framework based on Signature Patches, Micro Registration, and Sparse Representation for Optical Text Recognition

Farrahi Moghaddam R, Cheriet M
Conference Paper Published - Proceedings, 1259-1265, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, 2012-07-02
Abstract
A framework for the development of segmentation-free optical recognizers of ancient manuscripts, which work free of line, word, and character segmentation, is proposed. The framework introduces a new representation of visual text based on the concept of signature patches. These patches, which are free of traditional text guidelines such as the baseline, are registered to each other using a microscale registration method based on the estimation of active regions with a multilevel classifier, the directional map. Then, a one-dimensional feature vector, named the spiral features, is extracted from the registered signature patches. The incremental learning process is performed using a sparse representation over a dictionary of spiral feature atoms. The framework is applied to the George Washington database with promising results.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

Hyperspectral Band Selection Based on Graph Clustering

Hedjam R, Cheriet M
Conference Paper Published - Proceedings, 813-817, 11th International Conference on Information Sciences, Signal Processing and their Applications: Special Sessions (ISSPA2012), Montreal, Canada, 2012-07-02
Abstract
In this paper we present a new method for the hyperspectral band selection problem. The principle is to create a band adjacency graph (BAG) in which the nodes represent the bands and the edges carry similarity weights between the bands. The Markov clustering process (MCL process) defines a sequence of stochastic matrices by alternating two operators on the associated affinity matrix, forming distinct clusters of highly correlated bands. Each cluster is represented by one band, and the representative bands form the new data cube to be used in subsequent processing. The proposed algorithm is tested on a real dataset and compared against the state of the art. The results are promising.
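The MCL alternation described above can be sketched compactly. The affinity matrix and parameter values below are a toy illustration, not the paper's setup:

```python
# Compact sketch of Markov clustering (MCL) on a band adjacency matrix:
# "expansion" (matrix power) alternates with "inflation" (elementwise
# power followed by column renormalization), so that flow concentrates
# inside groups of mutually similar bands.
import numpy as np

def mcl(affinity, power=2, inflation=2.0, iters=50):
    M = affinity / affinity.sum(axis=0)       # column-stochastic matrix
    for _ in range(iters):
        M = np.linalg.matrix_power(M, power)  # expansion
        M = M ** inflation                    # inflation
        M = M / M.sum(axis=0)                 # renormalize columns
    return M

# Two groups of mutually similar bands (self-loops included).
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
clusters = mcl(A)
# The nonzero pattern separates bands 0-1 from bands 2-3: flow never
# crosses between the two blocks, so each block is one cluster.
print(np.round(clusters, 2))
```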
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

July-2012

Carbon Metering and Effective Tax Cost Modeling for Virtual Machines

Farrahi Moghaddam F, Farrahi Moghaddam R, Cheriet M
Conference Paper Published - Proceedings, 758-763, IEEE Fifth International Conference on Cloud Computing (CLOUD'12), 2012-06-24
Abstract
With rising concerns about global warming and the environmental impact of greenhouse gas (GHG) emissions, energy efficiency and carbon footprint reduction have attracted many researchers to provide efficient models and tools for energy, carbon, and cost estimation and management. In this paper, a model for measuring the energy consumption and carbon footprint of an individual virtual machine is presented, based on resource usage and performance monitoring counters. A simple cost model is presented in order to evaluate the energy consumption and carbon footprint models. The model is evaluated on a simulated virtual private cloud with different methodologies, such as server consolidation and multi-level grouping heuristic algorithms. The results show that such heuristic algorithms are able to significantly reduce the energy cost and carbon footprint of an individual virtual machine in comparison with other methodologies such as server consolidation. The results also show that this cost reduction efficiency is positively correlated with the carbon footprint tax rate.
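The metering chain, from resource counters to energy, carbon, and cost, can be illustrated with a toy model. All the coefficients below are made up for illustration; the paper's counter-based model is more detailed:

```python
# Toy per-VM metering chain: usage counters -> energy (via hypothetical
# power coefficients) -> carbon (via a grid emission factor) -> cost
# (energy price plus a carbon tax). Every constant here is illustrative.
POWER_COEFF = {"cpu_hours": 0.10, "mem_gb_hours": 0.01}  # kWh per unit
EMISSION_FACTOR = 0.5    # kg CO2 per kWh (grid-dependent)
ENERGY_PRICE = 0.08      # $ per kWh
CARBON_TAX = 0.03        # $ per kg CO2

def vm_cost(usage):
    """Return (energy_kwh, carbon_kg, total_cost) for one VM's usage."""
    energy = sum(POWER_COEFF[k] * v for k, v in usage.items())
    carbon = energy * EMISSION_FACTOR
    cost = energy * ENERGY_PRICE + carbon * CARBON_TAX
    return energy, carbon, cost

energy, carbon, cost = vm_cost({"cpu_hours": 100.0, "mem_gb_hours": 200.0})
print(round(energy, 2), round(carbon, 2), round(cost, 2))  # 12.0 6.0 1.14
```

Raising `CARBON_TAX` increases the share of cost attributable to carbon, which mirrors the correlation with tax rates noted in the abstract.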
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) - CRD-Ericsson

April-2012

Multilevel Grouping Genetic Algorithm for Low Carbon Virtual Private Clouds

Farrahi Moghaddam F, Farrahi Moghaddam R, Cheriet M
Conference Paper Published - Proceedings, 315-324, Cloud Computing and Services Science (CLOSER 2012), 2012-04-18
Abstract
The optimization problem of physical server consolidation is very important for the energy efficiency and cost reduction of data centers. For this type of problem, which can be considered a bin-packing problem, traditional heuristic algorithms such as the genetic algorithm (GA) are not suitable. Therefore, other heuristic algorithms have been proposed instead, such as the grouping genetic algorithm (GGA), which is able to preserve the group structure of the problem. Although GGAs have achieved good results on server consolidation within a given data center, they are weak at optimizing a network of data centers. In this paper, a new grouping genetic algorithm, called the Multi-Level Grouping Genetic Algorithm (MLGGA), is introduced; it is designed for multi-level bin-packing problems such as the optimization of a network of data centers for carbon footprint reduction, energy efficiency, and operating cost reduction. The new MLGGA algorithm is tested on a real-world problem in a simulation platform, and its results are compared with the GGA results. The comparison shows a significant increase in performance achieved by the proposed MLGGA algorithm.
Funding Resources

CANARIE - GreenStar Network

September-2011

TSV-LR: Topological signature vector-based lexicon reduction for fast recognition of pre-modern Arabic subwords

Chherawala Y, Wisnovsky R, Cheriet M
Conference Paper Published - Proceedings, 6-13, 2011 Workshop on Historical Document Imaging and Processing (HIP '11), Beijing, China, 2011-09-16
Abstract
Automatic recognition of Arabic words is a challenging task, and its complexity increases as the lexicon grows. In pre-modern documents, the vocabulary is unconstrained; therefore, a lexicon-reduction strategy is needed to reduce the computational complexity of recognition. This paper proposes a novel lexicon-reduction method for Arabic subwords based on the topology and geometry of their shapes. First, the subword shape's topological and geometrical information is extracted from its skeleton and encoded into a graph. The graph is then converted into a topological signature vector (TSV), which preserves the graph structure. The lexicon is reduced based on the TSV distance between the lexicon subwords' shapes and a query shape, by keeping the i nearest subwords. The value of i is selected according to a predetermined lexicon-reduction accuracy. The proposed framework has been tested on a database of pre-modern Arabic subword shapes with promising results.
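The final reduction step, keeping the i nearest subwords by TSV distance, can be sketched as follows. The vectors and entries below are hypothetical:

```python
# Sketch of the reduction step: given TSVs for the lexicon's subword
# shapes and a query TSV, keep only the i nearest lexicon entries
# (Euclidean distance here; the paper's TSV distance may differ).

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def reduce_lexicon(lexicon, query_tsv, i):
    """Keep the i subwords whose TSVs are closest to the query's TSV."""
    ranked = sorted(lexicon, key=lambda entry: euclidean(entry[1], query_tsv))
    return [subword for subword, _ in ranked[:i]]

lexicon = [
    ("sw1", [0.9, 0.1, 0.0]),
    ("sw2", [0.2, 0.8, 0.1]),
    ("sw3", [0.1, 0.9, 0.3]),
]
print(reduce_lexicon(lexicon, [0.15, 0.85, 0.15], i=2))  # -> ['sw2', 'sw3']
```

The recognizer then only has to discriminate among the i retained candidates rather than the full lexicon, which is where the computational saving comes from.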
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

September-2011

Novel Data Representation for Text Extraction from Multispectral Historical Document Images

Hedjam R , Cheriet M
Conference Papers Published - Proceedings, pp. 172-176, International Conference on Document Analysis and Recognition (ICDAR), Beijing, China, 2011-09-18
Abstract
The extraction and analysis of useful information from old document images is very important for cultural heritage preservation. In advanced research, where the goal is to separate the foreground (in general, text) from the background, image restoration and pattern classification techniques are used. Most of these methods classify the pixels based on their gray-scale values. In this paper, we propose to perform foreground pattern extraction using region-of-interest (ROI) analysis and a maximum-likelihood classifier designed for multispectral document images. As a contribution, a new feature vector that improves discrimination between patterns is proposed and embedded in a simple statistical classification method. The results, which are promising, are compared to the state of the art.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

September-2011

Indexing On-Line Handwritten Texts Using Word Confusion Networks

Saldarriaga S P , Cheriet M
Conference Papers Published - Proceedings, pp. 197-201, International Conference on Document Analysis and Recognition, Beijing, China, 2011-09-18
Abstract
In the context of handwriting recognition, word confusion networks (WCN) are convenient representations of alternative recognition candidates. They provide alignment for mutually exclusive words along with the posterior probability of each word. In this paper, we present a method for indexing on-line handwriting based on WCN. The proposed method exploits the information provided by WCN in order to enhance relevant keyword extraction. In addition, querying the index for a given keyword has worst-case complexity O(log n), compared to usual keyword-spotting algorithms, which run in O(n). Experiments show promising results in keyword retrieval effectiveness when using WCN, compared to keyword search over 1-best recognition results.
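The O(log n) lookup comes from keeping the index keys sorted so that a query is a binary search. A minimal sketch, assuming a simple keyword-to-postings index with made-up document IDs and posteriors (the paper's index is built from WCN hypotheses):

```python
import bisect

# Hedged sketch: an inverted index over word hypotheses. Keys are kept
# in a sorted list, so a query is a binary search, O(log n) in the
# number of distinct keywords. IDs and posteriors are illustrative.
index = {}   # keyword -> list of (document_id, posterior)

def add_hypothesis(doc_id, word, posterior):
    index.setdefault(word, []).append((doc_id, posterior))

def query(word, keys):
    """Binary-search the sorted key list, then fetch the postings."""
    pos = bisect.bisect_left(keys, word)
    if pos < len(keys) and keys[pos] == word:
        return index[word]
    return []

add_hypothesis("doc1", "hello", 0.8)
add_hypothesis("doc2", "hello", 0.6)
add_hypothesis("doc1", "world", 0.9)
keys = sorted(index)          # built once, reused for every query
hits = query("hello", keys)
```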
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

September-2011

Combining statistical and geometrical classifiers for text extraction in multispectral document images

Hedjam R , Cheriet M
Conference Papers Published - Proceedings, pp. 98-105, 2011 Workshop on Historical Document Imaging and Processing (HIP '11), Beijing, China, 2011-09-16
Abstract
Extraction of the original text from historical document images is very important in the preservation of cultural heritage. In recent decades, many image processing techniques have been developed to separate the main text from the document image background, most of which are based on grayscale treatment. In this paper, we propose a new text extraction method designed for multi-spectral document images (MSDI), based on a combination of two classifiers, one statistical and the other geometric. Our main contribution is the novel technique involving feature extraction and classifier weighting in the context of MSDI. The results, which are compared to two binarization methods, are promising.
Funding Resources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2011

Low Carbon Virtual Private Clouds

Farrahi Moghaddam F , Cheriet M, Nguyen K K
Conference Papers Published - Proceedings, pp. 259-266, 2011 IEEE International Conference on Cloud Computing (CLOUD'11), 2011-07-04
Abstract
Data center energy efficiency and carbon footprint reduction have attracted a great deal of attention across the world for some years now, and recently more than ever. Live Virtual Machine (VM) migration is a prominent solution for achieving server consolidation in Local Area Network (LAN) environments. With the introduction of live Wide Area Network (WAN) VM migration, however, the challenge of energy efficiency extends from a single data center to a network of data centers. In this paper, intelligent live migration of VMs within a WAN is used as a reallocation tool to minimize the overall carbon footprint of the network. We provide a formulation to calculate carbon footprint and energy consumption for the whole network and its components, which will be helpful for customers of a provider of cleaner energy cloud services. Simulation results show that using the proposed Genetic Algorithm (GA)-based method for live VM migration can significantly reduce the carbon footprint of a cloud network compared to the consolidation of individual data center servers. In addition, the WAN data center consolidation results show that an optimum solution for carbon reduction is not necessarily optimal for energy consumption, and vice versa. Also, the simulation platform was tested under heavy and light VM loads, the results showing the levels of improvement in carbon reduction under different loads.
Funding Resources

CANARIE - GreenStar Network

May-2011

Compact support kernels based time-frequency distributions: Performance evaluation

Abed M, Belouchrani A , Cheriet M , Boashash B
Conference Papers Published - Proceedings, pp. 4180-4183, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011-05-22
Abstract
This paper presents two new time-frequency distributions based on kernels with compact support (KCS), namely the separable CB (SCB) and the polynomial CB (PCB) TFDs. The implementation of these distributions follows the method developed for the Cheriet-Belouchrani (CB) TFD. The performance of this family of TFDs is compared to the best-known quadratic distributions through tests on multi-component signals with linear and nonlinear frequency modulations (FMs), also considering noise effects. Comparisons are based on the evaluation of an objective criterion, namely Boashash-Sucic's normalized instantaneous resolution performance measure, which provides the optimized TFD using a specific methodology. In all the presented examples, the KCS TFDs are shown to achieve significant interference mitigation while preserving the energy concentration of each component around its instantaneous frequency law, yielding high resolution measure values.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

November-2010

Personalizable Pen-based Interface using Lifelong Learning

Almaksour A, Anquetil E, Quiniou S, Cheriet M
Conference Papers Published - Proceedings, pp. 188-193, 2010 IAPR Intl. Conference ICFHR, Kolkata, India, 2010-11-16
Abstract
In this paper, we present a new method to design customizable self-evolving fuzzy rule-based classifiers. The presented approach combines an incremental clustering algorithm with a fuzzy adaptation method in order to learn and maintain the model. We use this method to build an evolving handwritten gesture recognition system that can be integrated into an application to provide personalization capabilities. Experiments on an on-line gesture database were performed considering various user personalization scenarios. The experiments show that the proposed evolving gesture recognition system continuously adapts and evolves according to new data from learned classes, and remains robust when new unseen classes are introduced at any moment during the lifelong learning process.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

October-2010

Ontology-based Resource Description and Discovery Framework for Low Carbon Grid Networks

Daouadji A, Nguyen K K, Lemay M , Cheriet M
Conference Papers Published - Proceedings, pp. 477-482, IEEE International Conference on Smart Grid Communications 2010, Maryland, USA, 2010-10-04
Abstract
Using smart grids to build low-carbon networks is one of the most challenging topics in the ICT (Information and Communication Technologies) industry. One of the first worldwide initiatives is the GreenStar Network, powered entirely by renewable energy sources such as solar, wind and hydroelectricity across Canada. Smart grid techniques are deployed to migrate data centers among network nodes according to energy source availability, thus reducing CO2 emissions to a minimum. Such flexibility requires scalable resource management support, which is achieved through virtualization. Virtualization enables the sharing, aggregation, and dynamic configuration of a large variety of resources. A key challenge in developing such virtualized management is an efficient resource description and discovery framework, given the large number of elements and the diversity of architectures and protocols. In addition, dynamic characteristics and different resource description methods must be addressed. In this paper, we present an ontology-based resource description framework developed specifically for ICT energy management, where the focus is on the energy-related semantics of resources and their properties. We then propose a scalable resource discovery method for large and dynamic collections of ICT resources, based on semantic similarity inside a federated index using a Bayesian belief network. The proposed framework allows users to identify the cleanest resource deployments for a given task, taking energy source availability into account. Experimental results compare the proposed framework with a traditional one in terms of GHG emission reductions.
Funding Sources

CANARIE - GreenStar Network

October-2010

Green ICT: the rationale for a focus on curbing greenhouse gas emissions

Despins C, Arnaud B St, Labelle R , Cheriet M
Conference Papers Published - Proceedings, pp. 1-6, The IEEE 2010 International Conference on Wireless Communications and Signal Processing, Suzhou, China, 2010-10-21
Abstract
Funding Sources

CANARIE - GreenStar Network

October-2010

Le contrôle d'accès dans les environnements fédérés : problématique et approches techniques

Tellier J, Robert J M , Cheriet M
Conference Papers Published - Proceedings, GRES Conference 2010, Montreal, Canada, 2010-10-13
Abstract
A virtual organization is formed when geographically distributed institutions want to share data or computational resources in order to accomplish a common goal. Since its members belong to multiple administrative domains, managing access control in a unified manner throughout the virtual organization can be troublesome. Many approaches have been suggested, but none deals with every aspect of the problem. In fact, since the needs of such systems may greatly vary depending on their intended use, it is impossible to develop a universal solution. This article aims to introduce federated environments from an access control standpoint. It also describes some techniques that can be used to address this problem.
Funding Sources

CANARIE - GreenStar Network

August-2010

Evolving Fuzzy Classifiers: Application to Incremental Learning Handwritten Gesture Recognition Systems

Almaksour A, Anquetil E,Quiniou S , Cheriet M
Conference Papers Published - Proceedings, pp. 4056-4059, 2010 IAPR Intl. Conference ICPR, Istanbul, Turkey, 2010-08-23
Abstract
In this paper, we present a new method to design customizable self-evolving fuzzy rule-based classifiers. The presented approach combines an incremental clustering algorithm with a fuzzy adaptation method in order to learn and maintain the model. We use this method to build an evolving handwritten gesture recognition system. The self-adaptive nature of this system allows it to start its learning process with few learning samples, to continuously adapt and evolve according to any new data, and to remain robust when a new unseen class is introduced at any moment in the lifelong learning process.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

August-2010

Length increasing active contour for the segmentation of small blood vessels

Rivest-Hénault D, Deschenes S, Lapierre C , Cheriet M
Conference Papers Published - Proceedings, pp. 2796-2799, 2010 IAPR Intl. Conference ICPR, Istanbul, Turkey, 2010-08-23
Abstract
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2010

Text extraction from degraded document images

Hedjam R, Farrahi Moghaddam R , Cheriet M
Conference Papers Published - Proceedings, pp. 248-253, The Second European Workshop on Visual Information Processing 2010 EUVIP, Paris, France, 2010-07-05
Abstract
In this work, a robust segmentation method for text extraction from historical document images is presented. The method is based on Markovian-Bayesian clustering on local graphs at both pixel and regional scales. It consists of three steps. In the first step, an over-segmented map of the input image is created; the resulting map provides rich and accurate semi-mosaic fragments. In the second step, this map is processed: similar and adjoining sub-regions are merged together to form accurate text shapes. In the final step, the output of the second step is clustered with a fixed number of classes to obtain the segmentation. The method makes extensive use of local and spatial correlation and coherence, both within the image and between the stroke parts, and is therefore very robust with respect to degradation. The resulting segmented text is smooth, and weak connections and loops are preserved thanks to the robust nature of the method. The output can be used in subsequent skeletonization processes, which require preservation of the text topology to achieve high performance. The method is tested on real degraded document images with promising results.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2010

Semi-Supervised Learning for Weighted LS-SVM

Adankon M M , Cheriet M
Conference Papers Published - Proceedings, pp. 1-8, 2010 IEEE Intl. Conference IJCNN, Barcelona, Spain, 2010-07-18
Abstract
The least squares support vector machine (LS-SVM) is an interesting variant of the SVM. It performs structural risk minimization through margin maximization and has excellent generalization power. For some applications, it is preferable to use the weighted LS-SVM, where the impact of each training sample is controlled by a weighting factor. In this paper, we consider the use of the weighted LS-SVM in semi-supervised learning. We propose an algorithm to perform this type of learning by extending the transductive SVM idea. We tested our algorithm on both artificial and real problems and demonstrate its usefulness in comparison with other semi-supervised learning methods.
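A distinctive feature of the (weighted) LS-SVM is that training reduces to solving one linear system rather than a quadratic program. The sketch below illustrates that reduction on a toy 1-D dataset with a linear kernel; the data, weights and gamma are invented, and the semi-supervised extension from the paper is not shown.

```python
# Hedged sketch of weighted LS-SVM training: the dual is a single linear
# system in (bias, alpha); per-sample weights v[i] scale the regularization
# term 1/(gamma*v[i]) on the diagonal. Toy data, linear kernel.
def solve(A, b):
    """Plain Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def train_wlssvm(X, y, v, gamma, K=lambda a, b: a * b):
    n = len(X)
    # KKT system: [[0, y^T], [y, Omega + diag(1/(gamma*v))]] [b; alpha] = [0; 1]
    A = [[0.0] + [float(yi) for yi in y]]
    for i in range(n):
        row = [float(y[i])]
        for j in range(n):
            row.append(y[i] * y[j] * K(X[i], X[j])
                       + (1.0 / (gamma * v[i]) if i == j else 0.0))
        A.append(row)
    z = solve(A, [0.0] + [1.0] * n)
    bias, alpha = z[0], z[1:]
    return lambda x: bias + sum(a * yi * K(x, xi)
                                for a, yi, xi in zip(alpha, y, X))

f = train_wlssvm(X=[-1.0, 1.0], y=[-1, 1], v=[1.0, 1.0], gamma=10.0)
```

The decision function's sign gives the predicted class; lowering v[i] lets a suspect sample influence the solution less, which is what the weighting is for.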
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2010

Degraded Color Document Image Enhancement Based on NRCIR

Chen S, Beghdadi A , Cheriet M
Conference Papers Published - Proceedings, pp. 19-22, The Second European Workshop on Visual Information Processing 2010 EUVIP, Paris, France, 2010-07-05
Abstract
An automatic algorithm for degraded color document image enhancement is proposed, based on our previous work on Natural Rendering of Color Image using Retinex (NRCIR), with respect to document image characteristics. In the proposed work, an adaptive workflow is designed to enhance both the luminance and chrominance contrast of the document image while keeping degradations within tolerance and minimizing hue shift. Tests on degraded document image databases were carried out, and the results show encouraging performance of the proposed method.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2010

Segmentation-based Document Image Denoising

Hedjam R, Beghdadi A , Cheriet M
Conference Papers Published - Proceedings, pp. 61-65, The Second European Workshop on Visual Information Processing 2010 EUVIP, Paris, France, 2010-07-05
Abstract
In this work, a robust method for document image denoising is presented. The idea is to combine the NLM filter with a Markovian segmentation into regions. NLM filtering allows distant but appropriate pixels to participate in the denoising process. Although the weights of non-similar (irrelevant) pixels are very small, the large number of such pixels introduces blur. In this work, we present a new method to select the best candidate pixels based on their similarity. Before performing the denoising process, we segment the noisy image into regions, where similar pixels belong to the same homogeneous region r. Thus, to denoise a given pixel i belonging to a region ri, the proposed algorithm looks at the neighboring pixels of i and includes only those belonging to the same region ri. This method is tested on real noisy document images with promising results, and it presents an improvement over the original NLM.
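The core idea, restricting NLM averaging to a pixel's own segmentation region, can be sketched on a 1-D signal. This is a minimal illustration with hand-given labels and invented parameters, not the paper's 2-D pipeline or its Markovian segmentation.

```python
import math

# Hedged sketch on a 1-D signal: classic NLM weights every candidate by
# patch similarity, but here a pixel only averages with candidates from
# its own (hand-given) segmentation region, as the abstract describes.
def region_nlm(signal, labels, h=0.3, radius=1):
    def patch(i):  # small clamped neighborhood around pixel i
        return [signal[max(0, min(len(signal) - 1, i + d))]
                for d in range(-radius, radius + 1)]
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for j in range(len(signal)):
            if labels[j] != labels[i]:
                continue  # exclude pixels outside pixel i's region
            d2 = sum((a - b) ** 2 for a, b in zip(patch(i), patch(j)))
            w = math.exp(-d2 / (h * h))   # patch-similarity weight
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# noisy dark text (region 1) next to a noisy light background (region 0)
noisy = [0.1, 0.0, 0.05, 0.9, 1.0, 0.95]
labels = [0, 0, 0, 1, 1, 1]
denoised = region_nlm(noisy, labels)
```

Because averaging never crosses the region boundary, the text/background edge cannot be blurred, which is the improvement over plain NLM the abstract claims.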
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

June-2010

IBN SINA: A database for research on processing and understanding of Arabic manuscripts images

Farrahi Moghaddam R, Cheriet M, Adankon M M, Filonenko K, Wisnovsky R
Conference Papers Published - Proceedings, pp. 11-18, 2010 IAPR Intl. Workshop on Document Analysis Systems, Boston, MA, USA, 2010-06-09
Abstract
This paper describes the steps that have been undertaken in order to develop the IBN SINA database, which is designed to apply learning techniques in the processing and understanding of document images. The description of the preparation process, including preprocessing, feature extraction and labeling, is provided. The database has been evaluated using classification techniques, such as the SVM classifiers. In order to make the database compatible with these classifiers, the labels of the shapes have been translated into a set of bi-class problems. Promising results with the SVM classifiers have been obtained.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

April-2010

Decreasing Live Virtual Machine Migration Down-Time Using a Memory Page Selection Based on Memory Change PDF

Farrahi Moghaddam R, Cheriet M
Conference Papers Published - Proceedings, pp. 355-359, IEEE Conference on Networking, Sensing and Control, Chicago, USA, 2010-04-11
Abstract
Seamless migration of Virtual Machines (VMs) guarantees Service Level Agreements during live data center migration, which is an important step in the allocation and management of Information and Communication Technology (ICT) infrastructures and resources across the globe. To our knowledge, only simplistic models have been used to describe the process governing memory migration. In this paper, we first formulate a mathematical model of virtual machine memory transfer. Then, we show the limitations of such a transfer. Finally, we introduce and evaluate a more efficient method for memory transfer.
Funding Sources

CANARIE - GreenStar Network

March-2010

Semi-automatic segmentation of major aorta-pulmonary collateral arteries (MAPCAs) for Image Guided Procedures

Rivest-Hénault D, Duong L, Deschenes S, Lapierre C , Cheriet M
Conference Papers Published - Proceedings, 2010 SPIE Medical Imaging, San Diego, USA, 2010-03-05
Abstract
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

November-2009

Markovian Clustering for the Non-local Means Image Denoising

Hedjam R, Farrahi Moghaddam R , Cheriet M
Conference Papers Published - Proceedings, pp. 3877-3880, 2009 IEEE Intl. Conference ICIP, Cairo, Egypt, 2009-11-07
Abstract
The non-local means filter is one of the most powerful denoising methods; it allows distant but appropriate pixels to participate in the denoising process. Although the weights of non-similar pixels are very small, the large number of such pixels introduces blur. In this work, we introduce an automatic and robust method to select the best candidate pixels based on their similarity to the target pixel. The method is based on graph partitioning and uses Markovian clustering on the pixel adjacency graph (PAG). In this way, a set of relevant pixels is obtained and used in a weighted average to denoise each pixel. To evaluate the method, denoising of natural images is conducted, and the results are compared to the standard NLM filter and an SVD-based method. The results are promising.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

November-2009

Robust Authentication Using Likelihood Ratio based Score Fusion of Voice and face

Bengherabi M, Mezai L, Harizi F, Guessoum A , Cheriet M
Conference Papers Published - Proceedings, pp. 1-6, 2009 IEEE Intl. Conf. ICIP, 2009-11-07
Abstract
With the increased use of biometrics for identity verification, there has been a similar increase in the use of multimodal fusion to overcome the limitations of unimodal biometric systems. While there are several types of fusion (e.g., decision level, score level, feature level, sensor level), research has shown that score-level fusion is the most effective in delivering increased accuracy. Recently, a promising framework for the optimal combination of match scores based on the likelihood ratio (LR) test has been proposed, in which the distributions of genuine and impostor match scores are modelled as finite Gaussian mixture models. In this paper, we examine the performance of combining face and voice biometrics at the score level using the LR classifier. Our experiments on the publicly available scores of the XM2VTS benchmark database show a consistent improvement in performance compared to the widely used sum rule preceded by min-max, z-score and tanh score normalization techniques.
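Likelihood-ratio fusion compares how likely the observed scores are under the genuine model versus the impostor model. The sketch below uses a single Gaussian per modality instead of the Gaussian mixtures used in the paper, and all score statistics are invented for illustration.

```python
import math

# Hedged sketch: fuse face and voice scores with a likelihood ratio,
# modelling genuine and impostor scores as single Gaussians (the paper
# fits Gaussian mixtures). Means/sigmas below are illustrative.
def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lr_fusion(scores, genuine_params, impostor_params):
    """Product of per-modality likelihood ratios (independence assumption)."""
    lr = 1.0
    for s, (mg, sg), (mi, si) in zip(scores, genuine_params, impostor_params):
        lr *= gauss(s, mg, sg) / gauss(s, mi, si)
    return lr

# (mean, sigma) per modality: first face, then voice
genuine = [(0.8, 0.10), (0.7, 0.15)]
impostor = [(0.3, 0.10), (0.2, 0.15)]
accept = lr_fusion([0.75, 0.65], genuine, impostor) > 1.0   # genuine-looking trial
reject = lr_fusion([0.25, 0.30], genuine, impostor) > 1.0   # impostor-looking trial
```

Thresholding the LR at 1 corresponds to equal priors and costs; in practice the threshold is tuned on a development set.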
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

September-2009

Design of a framework using InkML for pen-based interaction in a collaborative environment (Poster)

Quiniou S, Anquetil E , Cheriet M
Conference Papers Published - Proceedings, 2009 ACM Intl. Conference on HCI, Bonn, Germany, 2009-09-15
Abstract
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2009

Restoration and Segmentation of Highly Degraded Characters using a Shape-Independent Level Set Approach and Multi-level Classifiers

Farrahi Moghaddam R, Rivest-Hénault D , Cheriet M
Conference Papers Published - Proceedings, pp. 828-832, 2009 IAPR Intl. Conference ICDAR, Barcelona, Spain, 2009-07-26
Abstract
Segmentation of ancient documents is challenging. In the worst cases, text characters become fragmented as the result of strong degradation processes. New active contour methods make it possible to handle difficult cases in a spatially coherent fashion. However, most of these methods use restrictive a priori shape information that limits their application. In this work, we propose to address this issue by combining two complementary approaches. First, multi-level classifiers, which take advantage of a priori information on stroke width, are used to locate candidate character pixels. Second, a level set active contour scheme is used to identify the boundary of a character. Tests have been conducted on a set of ancient degraded Hebraic character images. Numerical results are promising.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2009

A Unified Framework Based on the Level Set Approach for Segmentation of Unconstrained Double-Sided Document Images Suffering from Bleed-Through

Farrahi Moghaddam R, Rivest-Hénault D, Yosef I B , Cheriet M
Conference Papers Published - Proceedings, pp. 441-445, IAPR Intl. Conference ICDAR, Barcelona, Spain, 2009-07-26
Abstract
A novel method for the segmentation of double-sided ancient document images suffering from the bleed-through effect is presented. It takes advantage of the level set framework to provide a completely integrated process for the segmentation of the text along with the removal of the bleed-through interfering patterns. This process is driven by three forces: 1) a binarization force based on an adaptive global threshold is used to identify regions of low intensity, 2) a reverse diffusion force allows for the separation of interfering patterns from the true text, and 3) a small regularization force favors smooth boundaries. This integrated method achieves high-quality results at reasonable computational cost, and can easily host other concepts to enhance its performance. The method is successfully applied to real and synthesized degraded document images. The registration problem of double-sided document images is also addressed by introducing a level set method; the results are promising.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2009

Handling Out-of-Vocabulary Words and Recognition Errors based on Word Linguistic Context for Handwritten Sentence Recognition

Quiniou S, Cheriet M, Anquetil E
Conference Papers Published - Proceedings, pp. 466-470, 2009 IAPR Intl. Conference ICDAR, Barcelona, Spain, 2009-07-26
Abstract
In this paper, we investigate the use of linguistic information given by language models to deal with word recognition errors in handwritten sentences. We focus especially on errors due to out-of-vocabulary (OOV) words. First, word posterior probabilities are computed and used to detect error hypotheses in output sentences. An SVM classifier allows these errors to be categorized according to defined types. Then, a post-processing step is performed using a language model based on part-of-speech (POS) tags, which is combined with the n-gram model previously used. Thus, error hypotheses can be further recognized and POS tags can be assigned to the OOV words. Experiments on on-line handwritten sentences show that the proposed approach allows a significant reduction of the word error rate.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2009

A New Approach for Skew Correction of Documents Based on Particle Swarm Optimization

Sadri J , Cheriet M
Conference Papers Published - Proceedings, pp. 1066-1070, 2009 IAPR Intl. Conference ICDAR, Barcelona, Spain, 2009-07-26
Abstract
This paper presents a novel approach for skew correction of documents. Skew correction is modeled as an optimization problem and, for the first time, particle swarm optimization (PSO) is used to solve it. A new objective function based on the local minima and maxima of projection profiles is defined, and PSO is used to find the angle that maximizes the differences between the values of the local minima and maxima. In our approach, the local minima and maxima converge to the locations of lines and the spaces between lines. Results of our skew correction algorithm are shown on documents written in different scripts, such as Latin and Arabic-related scripts (e.g., Arabic, Farsi, Urdu). Experiments show that our algorithm can handle a wide range of skew angles and is robust on both gray-level and binary images of different scripts.
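The objective being optimized can be illustrated on synthetic data: a deskewed page has a projection profile with sharp swings between text-line peaks and inter-line valleys. The sketch below uses the profile's total swing as a simplified stand-in for the paper's minima/maxima criterion, and an exhaustive angle sweep in place of PSO; the synthetic "ink" points are invented.

```python
import math

# Hedged sketch: synthetic "ink" points on two horizontal text lines are
# rotated by 5 degrees; the correcting angle is the one that maximizes the
# swing of the horizontal projection profile (a simplified stand-in for
# the paper's local-minima/maxima objective; the paper uses PSO, not a sweep).
def profile_score(points, angle_deg, n_bins=20):
    a = math.radians(angle_deg)
    # y-coordinates after rotating the page by -angle_deg
    ys = [-x * math.sin(a) + y * math.cos(a) for x, y in points]
    lo, hi = min(ys), max(ys)
    hist = [0] * n_bins
    for y in ys:
        hist[min(n_bins - 1, int((y - lo) / (hi - lo + 1e-9) * n_bins))] += 1
    # total swing of the profile: large when text rows form sharp peaks
    return sum(abs(hist[i + 1] - hist[i]) for i in range(n_bins - 1))

true_skew = 5.0
t = math.radians(true_skew)
base = [(x / 10.0, y) for x in range(50) for y in (0.0, 1.0)]  # two text lines
points = [(x * math.cos(t) - y * math.sin(t),
           x * math.sin(t) + y * math.cos(t)) for x, y in base]
best = max(range(-10, 11), key=lambda d: profile_score(points, d))
```

PSO replaces the sweep with a swarm of candidate angles updated by personal/global bests, which scales to fine angular resolution without evaluating every angle.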
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2009

Robust Authentication using Likelihood Ratio based Score Fusion of Voice and Face

Bengherabi M, Mezai L, Harizi F, Guessoum A , Cheriet M
Conference Papers Published - Proceedings, pp. 57-61, 2009 SIGMAP, Milan, Italy, 2009-07-07
Abstract
With the increased use of biometrics for identity verification, there has been a similar increase in the use of multimodal fusion to overcome the limitations of unimodal biometric systems. While there are several types of fusion (e.g., decision level, score level, feature level, sensor level), research has shown that score-level fusion is the most effective in delivering increased accuracy. Recently, a promising framework for the optimal combination of match scores based on the likelihood ratio (LR) test has been proposed, in which the distributions of genuine and impostor match scores are modelled as finite Gaussian mixture models. In this paper, we examine the performance of combining face and voice biometrics at the score level using the LR classifier. Our experiments on the publicly available scores of the XM2VTS benchmark database show a consistent improvement in performance compared to the widely used sum rule preceded by min-max, z-score and tanh score normalization techniques.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

July-2009

Application of Multi-Level Classifiers and Clustering for Automatic Word Spotting in Historical Document Images

Farrahi Moghaddam R , Cheriet M
Conference Papers Published - Proceedings, pp. 511-515, 2009 IAPR Intl. Conference ICDAR, Barcelona, Spain, 2009-07-26
Abstract
A complete system for the preprocessing and word spotting of very old historical document images is presented. Document images are processed for the extraction of salient information using a word spotting technique which needs no line or word segmentation and is language independent. A multi-class library of connected components of the document text is created based on six features. The spotting is performed using a Euclidean distance measure enhanced by rotation and dynamic time warping transforms. The method is applied to a dataset from the Juma Al Majid Center (Dubai) with promising results. The good performance of the word spotting technique is obtained using an automatic preprocessing stage, in which content-level classifiers extract accurate stroke pixels in a robust way. The preprocessed document images are also more legible to the end user and are less costly to archive and transfer.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) -Discovery Grant

June-2009

Help-Training semi-supervised LS-SVM

Adankon M M , Cheriet M
Conference Papers Published - Proceedings, pp. 49-56, 2009 IEEE Intl. IJCNN, Atlanta, Georgia, USA, 2009-06-14
Abstract
Help-Training for semi-supervised learning was proposed in our previous work to reinforce the self-training strategy by using a generative classifier alongside the main discriminative classifier. This paper extends the Help-Training method to the least squares support vector machine (LS-SVM), where both labeled and unlabeled data are used for training. Experimental results on both artificial and real problems show its usefulness when compared with other classical semi-supervised methods.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2008

EFDM: Restoration of Single-sided Low-quality Document Images

Farrahi Moghaddam R., Cheriet M.
Conference Paper Published - Proceedings of ICFHR 2008, pp. 16-21, Montreal, Canada, 2008-08-18
Abstract
This paper addresses the problem of restoration and enhancement of very old single-sided document images. As a first step, a degradation model is developed for the generation of synthesized degraded document images in both double-sided and single-sided formats. Then, we propose a novel method, based on the anisotropic diffusion method (ADM), for the restoration of degradations in single-sided document images. Because of the local character of ADM, we augment our method with two flow fields that play the role of global classifiers in separating the meaningful pixels. The new method also uses an extra diffusion of background information, which provides an efficient and accurate restoration of interference patterns and degraded backgrounds. The performance of the method is tested on both real samples, from the Google Book Search dataset and UNESCO's Memory of the World Programme, and synthesized samples provided by our degradation model. The results are promising.
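For context, the classical anisotropic diffusion building block (Perona-Malik) that the restoration method extends with global flow fields can be sketched as below. This is only the textbook ADM core, under our own parameter choices, not the paper's full method:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=5, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion (sketch).

    The edge-stopping function g(d) = exp(-(d/kappa)^2) shrinks the
    conduction across strong edges, so text strokes are preserved
    while flatter background regions are smoothed. lam <= 0.25 keeps
    the explicit 4-neighbour update stable. np.roll gives periodic
    boundaries, which is fine for a sketch.
    """
    u = np.asarray(img, dtype=float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

A purely local scheme like this cannot tell faint strokes from bleed-through interference, which is exactly the gap the paper's two global flow fields are introduced to close.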
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2008

Low Quality Image Processing for DIAR: Issues and Directions

Cheriet M., Farrahi Moghaddam R.
Conference Paper Published - Proceedings of EUSIPCO 2008, Lausanne, Switzerland, 2008-08-25
Abstract
Issues facing document image analysis and recognition are discussed based on the quality and complexity of images. Special attention is paid to low-quality images of ancient manuscripts. Because of the complex content of this type of document, which usually contains several layers of information at the same scale levels, the definition of degradation must be reconsidered. This opens up new challenges for the modeling of document degradation. Also discussed is the development of appropriate restoration methods for handling degradation. The advantages of preprocessing a document to remove some of the unwanted layers of information in the document image, in order to improve its quality, are considered using currently available or new paradigms.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

August-2008

Model Selection for LS-SVM: Application to Handwriting Recognition

Adankon M. M., Cheriet M.
Conference Paper Published - Proceedings of ICFHR 2008, Montreal, Canada, 2008-08-19
Abstract
The support vector machine (SVM) is a powerful classifier which has been used successfully in many pattern recognition problems. It has also been shown to perform well in the handwriting recognition field. The least squares SVM (LS-SVM), like the SVM, is based on the margin-maximization principle performing structural risk minimization. However, it is easier to train than the SVM, as it requires only the solution to a convex linear problem, and not a quadratic problem as in the SVM. In this paper, we propose to conduct model selection for the LS-SVM using an empirical error criterion. Experiments on handwritten character recognition show the usefulness of this classifier and demonstrate that model selection improves the generalization performance of the LS-SVM.
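The abstract's key point, that LS-SVM training reduces to a single linear system rather than a quadratic program, can be illustrated with the standard LS-SVM formulation below. This is a generic textbook sketch (our own function names and hyperparameters), not the paper's model-selection procedure:

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM training: solve one linear system instead of the SVM's QP.

        [[0, 1^T        ],   [[b    ],   [[0],
         [1, K + I/gamma]] .  [alpha]] =  [y]]
    """
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    """Decision rule: sign(K(new, train) @ alpha + b)."""
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)
```

Model selection in this setting means tuning `gamma` and `sigma`; the paper's contribution is to drive that tuning with an empirical error criterion rather than plain cross-validation.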
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

June-2008

DIAR: Advances in Degradation Modeling and Processing

Cheriet M., Farrahi Moghaddam R.
Conference Paper Published - Proceedings of ICIAR 2008, pp. 1-10, Povoa de Varzim, Portugal, 2008-06-25
Abstract
State-of-the-art OCR/ICR algorithms and software are the result of large-scale experiments on the accuracy of OCR systems and proper selection of the size and distribution of training sets. A key factor in improving OCR technology is degradation modeling. While it is a leading-edge tool for processing conventional printed materials, degradation modeling now faces additional challenges as a result of the appearance in recent years of new imaging media, new definitions of text information, and the need to process low-quality document images. In addition to discussing these challenges in this paper, we present well-developed degradation models and suggest some directions for further study. Particular attention is paid to the restoration and enhancement of degraded single-sided or multi-sided document images that suffer from bleed-through or shadow-through.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

March-2008

Degradation Modeling and Enhancement of Low Quality Documents

Cheriet M., Farrahi Moghaddam R.
Conference Paper Published - Proceedings of WOSPA 2008, Sharjah, UAE, 2008-03-18
Abstract
In order to tackle problems such as shadow-through and bleed-through, a novel defect model is developed which generates physically damaged document images. This model addresses physical degradation, such as aging and ink seepage. Based on the diffusive nature of the physical defects, the model is designed using virtual diffusion processes. Then, based on this degradation model, a restoration method is proposed and used to fix the bleed-through effect in double-sided document images using the reverse diffusion process. Subjective and objective evaluations are performed on both the degradation model and the restoration method. The experiments show promising results on both real and generated data.
Funding Sources

Natural Sciences and Engineering Research Council of Canada (NSERC) - Discovery Grant

2015

Multi-stage Defense-aware Security Modules Placement in the Cloud

Jarraya Y., Shameli-Sendi A., Fekih Ahmed M., Pourzandi M., Cheriet Mohamed
patent Filed February-2015 - International Application No. PCT/IB2015/051315
2014

Mapping Virtual Network Elements to Physical Resources in a Telco Cloud Environment

Nguyen K.-K., Cheriet Mohamed, Pourzandi M., Lemieux Y.
patent Filed 2014 - US no. P41471 FAM
2013

Multi-tenancy Isolation and Self-Management in the Cloud using Autonomic SDN Architecture

Fekih Ahmed M., Pourzandi M., Talhi C., Cheriet Mohamed
patent Filed 2013 - Number 7035744, United States, District of Columbia
2006-10-13

System for supporting collaborative work

Cheriet M., Belouchrani A.
patent Pending - Number 60/851,304, United States | Number 2,563,866, Canada
2006-04-25

Method and System for Measuring the Energy of a Signal

Cheriet Mohamed, Belouchrani A.
patent Completed - Number 7035744, United States, District of Columbia