Full Professor
VIT University, School of Computer Science and Engineering
He has authored more than 100 international and national journal papers and book chapters, and four books: Fundamental Approach to Discrete Mathematics; Computer Based on Mathematics; Theory of Computation; and Rough Set in Knowledge Representation and Granular Computing: On Some Aspects. In addition, he has edited seven books.
Dr. Acharjya has served various organisations in various capacities, such as academic guide of IGNOU and ICFAI University; conducting board member of Government (Autonomous) College, Rourkela; founder secretary of the OITS Rourkela chapter, Odisha; member of the board of studies in computer science and engineering; and member of the national and international networking relations of VIT Vellore. He is an associate editor of several international journals and a reviewer for many international journals, including Applied Soft Computing, Knowledge-Based Systems (Elsevier), IEEE Transactions on Fuzzy Systems, and the International Journal of Artificial Intelligence and Soft Computing. His current research interests include rough sets, formal concept analysis, knowledge representation, data mining, granular computing, genetic algorithms, neural networks, bio-inspired computing, and business intelligence.
VIT University, School of Computer Science and Engineering
VIT University, School of Computer Science and Engineering
VIT University, School of Computer Science and Engineering
Rourkela Institute of Management Studies, Department of Computer Science
Bartini Science College, Department of Mathematics
R. C. M. Science College, Department of Mathematics
Computer Science
Berhampur University, Brahmapur, India
Computer Science
Utkal University, Bhubaneswar, India
Mathematics
Berhampur University, Brahmapur, India
Applied Mathematics
National Institute of Technology, Rourkela, India
Mathematics
Berhampur University, Brahmapur, India
Received the "Lifetime Achievement Award" from NFED, Coimbatore, Tamilnadu, India on 5th September, 2021 for his splendid contribution to teaching and research.
The Institute of Self Reliance, Bhubaneswar, Odisha, India conferred the "Bharat Vikas Award" on Dr. Debi Prasanna Acharjya in sincere appreciation of his loyalty, diligence and outstanding performance in the field of rough computing.
The International Publishing House, one of the world's leading biographical specialists, conferred "The Best Citizens of India Award", 2015 on Dr. Debi Prasanna Acharjya for his contribution to technical publishing.
Received the "Outstanding Educator and Scholar Award" from the National Foundation for Entrepreneurship Development, Coimbatore, Tamilnadu, India on 5th September, 2015 (on the eve of Teachers' Day) for his splendid contribution to teaching and scholarly activities.
Khallikote Sanskrutika Parisad, Brahmapur, Odisha, at its seventh annual function, conferred the "Eminent Academician Award" on Dr. Debi Prasanna Acharjya for his outstanding contribution to teaching, research and publication, and for his scrupulous dedication to education.
Received the "Research Award" from VIT University, Vellore, Tamilnadu, India for publishing research papers in peer-reviewed journals, books, and book chapters.
Sambalpur University, Burla, Odisha awarded a "Gold Medal" to Debi Prasanna Acharjya for securing First Class First in M.Sc. (Applied Mathematics) from the National Institute of Technology, Rourkela, India.
Deriving knowledge from huge data is a great challenge today. Many new mathematical modelling tools, such as fuzzy sets, rough sets and soft sets, are emerging to address real-world tasks. The development of these techniques and tools and their popularity are studied under different domains, such as knowledge discovery in databases, computational intelligence, knowledge engineering and granular computing. The basic idea of the rough set is the approximation of a set by its lower and upper approximations, which are defined with respect to an equivalence relation. However, the requirement of an equivalence relation is a restrictive condition that may limit the application of the rough set model. Therefore, the basic notion of the rough set has been generalized in many ways. For instance, the equivalence relation has been generalized to binary relations, neighborhood systems, coverings and Boolean algebras. Further, the indiscernibility relation has been generalized to an almost indiscernibility relation with the introduction of the fuzzy proximity relation. This is further extended to the intuitionistic fuzzy proximity relation, and the concepts of rough sets on fuzzy approximation spaces and rough sets on intuitionistic fuzzy approximation spaces are studied. In addition, the rough set is hybridized with formal concept analysis to study various real-life applications.
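The lower and upper approximations described above can be sketched in a few lines of Python. This is a minimal illustration of Pawlak's classical definitions over a toy universe; the objects and attribute values are hypothetical, not drawn from any study mentioned here.

```python
# Minimal sketch of rough set lower/upper approximations (toy data).

def equivalence_classes(universe, key):
    """Partition the universe by an attribute function (indiscernibility)."""
    classes = {}
    for x in universe:
        classes.setdefault(key(x), set()).add(x)
    return list(classes.values())

def lower_approximation(classes, target):
    """Union of equivalence classes wholly contained in the target set."""
    return {x for c in classes if c <= target for x in c}

def upper_approximation(classes, target):
    """Union of equivalence classes that intersect the target set."""
    return {x for c in classes if c & target for x in c}

# Toy universe of six objects, labelled by one illustrative attribute.
universe = {1, 2, 3, 4, 5, 6}
attribute = {1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'c', 6: 'c'}
classes = equivalence_classes(universe, key=lambda x: attribute[x])

target = {1, 2, 3}                               # the set to approximate
lower = lower_approximation(classes, target)     # {1, 2}
upper = upper_approximation(classes, target)     # {1, 2, 3, 4}
```

The target set is "rough" precisely because the two approximations differ; their difference {3, 4} is the boundary region induced by the equivalence relation.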
Further, the rough set is generalized to the rough set on two universal sets with generalized approximation spaces, and further study in the same direction is carried out. We have defined a topological characterization of the rough set on two universal sets. Also, we establish necessary and sufficient type theorems and show how the results of Busse can be derived from them. Measures of uncertainty and rough equality of sets on two universal sets are also studied. Further, the rough set on two universal sets is generalized to the intuitionistic fuzzy rough set on two universal sets, and its properties that are important in the context of knowledge representation are studied. Also, rough computing is hybridized with Bayesian classification to study missing associations of attribute values. In addition, an application of rough computing to trust management in MANETs is carried out. Further, the multi-granular rough set on two universal sets is defined and its various properties are studied.
The main objective of our lab is to analyse huge data and to build software components of agents that are capable of reasoning and acting in a changing environment based on mathematical models. We are especially interested in logical agents and their various applications in web services, bio-inspired computing, intrusion detection, and image processing.
The information technology revolution has changed the lifestyle of the common man. Data are gathered at high speed by various real-life applications, and retrieving knowledge from them is a great challenge. Data generated from real-life applications such as medicine, business and economics contain uncertainties. So, uncertainties have to be considered to accomplish a healthier system. But some sensitive information present in the information system has to be conserved. All of this directs us to the theory of privacy-preserving data mining (PPDM). From banking to health care, individual persons' data are handled, which is a serious concern when they are exposed. Nevertheless, knowledge has to be mined from these data. PPDM maintains the equilibrium between research and disclosure. The primary objective of my research work is to design models using intelligent techniques to prevent identity, attribute and inference disclosures from the information system.
Information technology development has brought a revolutionary change in the way data are collected or generated for the ease of decision making. Though an enormous amount of data is available in the present era, the knowledge extracted from it is very minimal. Additionally, these data may be structured or unstructured. Rather than eliciting knowledge from domain experts, knowledge acquired through data mining techniques has proved to be proficient and well organized. Data mining can be seen as a type of decision support system. The objective of this research work is to analyze uncertain information in the study of decision making using soft computing techniques. The primary objective of his research work is to design intelligent models that will help knowledge workers toward a more intuitive, practical and effective use of knowledge and information in problem solving, planning and decision making.
The emergence of the computer has resulted in enormous growth of data in recent years. But these data are of no use until they provide some useful information. Extracting and analyzing data is a great challenge, and the extracted information may not be of interest to the user. So, identifying the data patterns in a huge amount of data is necessary. The available techniques fail to deal with uncertainty. Uncertain data produces results that may not be efficient and leads to wrong predictions. So, uncertainty has to be taken care of to have a healthier prediction and discovery system. The research work recorded by her is an attempt to construct intelligent prediction systems for real-life applications using rough computing and neural networks.
Intrusion detection basically deals with security threats to a computer or network. Illegal access to a computer or network by an adversary over the Internet causes an intrusion. Correspondingly, a legitimate user on the computer or network trying to gain extra rights for which they are not approved also causes an intrusion. The existing techniques of wireless network intrusion detection are not effective enough to identify intrusions with high competence. The main objective of this research is to detect distinct types of intrusion, particularly in wireless networks, using various recent soft computing techniques such as the dominance-based rough set, the multigranulation rough set and formal concept analysis.
Medical imaging is used to analyze many diseases in our day-to-day life. But these images contain uncertainties due to various factors, and thus intelligent techniques are essential to process them. How various soft computing techniques are used to process these uncertainties in classification, segmentation, and feature extraction is studied. The experimental results show the viability of the proposed research.
The research work started with rough set and binary-coded genetic algorithm hybridization for obtaining higher accuracy in prediction. But it has a limitation: the search space is large when the information system contains many parameters. This led us to further hybridize the rough set with a real-coded genetic algorithm. The hybridization indeed reduced the search space, but the accuracy dropped. To substantiate further, the fuzzy rough set is hybridized with the real-coded genetic algorithm. This hybridization is carried out due to the limitation of the rough set, which generates many rules, and to refine the rules for better accuracy.
The study of behavioural intention in any context is a critical issue. It involves many uncertainties, as the survey deals with humans and their behaviour. In this research work, the study of nurses' attitudes towards information technology usage and computerization is taken as the prime objective. To study the behaviour of nurses towards automation, data are collected from hospitals of Tamilnadu, India. The collected data are analyzed using statistical and rough computing techniques. Initially, descriptive analysis such as correlation, analysis of variance, and pie charts is used to identify the factors affecting computerization. Besides, the data are also analyzed to study nurses' attitudes towards computerization and information technology expansion.
Globalization, integration and collaboration among nations have brought about a burst of information and communication technologies. The information system has now expanded into a global information system in which people need integrated systems, and this drives the advancement of technologies. These embedded technologies have reached the doorstep of every person, creating an ambient environment with multi-disciplinary interaction techniques. In order to understand and study the adoption of these technologies, appropriate intelligence has to be applied to obtain precise decision making and knowledge acquisition. This leads to the genesis of ambient intelligence. When a real-time environment is examined, a large number of new technologies, such as sensor grids, generate uncertain and imprecise data. This uncertainty and imprecision become ubiquitous, making the system complex, and they have to be addressed through accurate techniques. Although there are many theories, models and techniques practicable for reinforcing knowledge acquisition, these techniques are not quite suitable for studying such real-time uncertain data with higher accuracy. In this research, the issue is addressed through the rough set, fuzzy rough set, multigranular rough set and SEM-PLS techniques.
This research work mainly focuses on the hybridization of bio-inspired computing with the rough set towards knowledge inferencing. In the first phase of hybridization, bio-inspired computing is employed to identify the main features of the information system. In the second phase, these features are analyzed using the rough set to extract knowledge from the information system. To initiate the research work, cuckoo search optimization is hybridized with the rough set, and its evaluation is carried out over a heart disease information system. In further research, the firefly optimization algorithm is hybridized with the rough set and evaluated over a chronic kidney disease information system. Subsequently, the artificial bee colony is hybridized with the rough set, and its evaluation is done over a hepatitis-B information system. Similarly, the bat algorithm is hybridized with the rough set and studied over chronic kidney diseases. Finally, all the models are compared.
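The two-phase scheme above can be sketched around the rough-set dependency degree, a standard fitness measure that a bio-inspired optimizer could maximize when searching for feature subsets. The sketch below is illustrative only: the attribute names and toy information system are hypothetical, not drawn from the heart disease or kidney disease studies.

```python
# Hedged sketch: rough-set dependency degree gamma(B) of a decision
# attribute on a feature subset B, usable as a feature-selection fitness.

def dependency_degree(rows, features, decision):
    """gamma(B) = |positive region| / |universe| for feature subset B."""
    # Group objects by their values on the chosen features (indiscernibility).
    blocks = {}
    for i, row in enumerate(rows):
        key = tuple(row[f] for f in features)
        blocks.setdefault(key, []).append(i)
    # A block belongs to the positive region if its decision is consistent.
    positive = sum(len(ids) for ids in blocks.values()
                   if len({rows[i][decision] for i in ids}) == 1)
    return positive / len(rows)

# Toy information system: two condition attributes 'a', 'b', decision 'd'.
rows = [
    {'a': 0, 'b': 0, 'd': 'no'},
    {'a': 0, 'b': 1, 'd': 'yes'},
    {'a': 1, 'b': 0, 'd': 'yes'},
    {'a': 1, 'b': 0, 'd': 'no'},   # conflicts with the previous object
]
print(dependency_degree(rows, ['a', 'b'], 'd'))  # 0.5
print(dependency_degree(rows, ['b'], 'd'))       # 0.25
```

A cuckoo search, firefly, bee colony or bat optimizer would evaluate candidate subsets with such a fitness, after which decision rules are generated from the selected features.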
Research related to the performance evaluation of electro-discharge machining (EDM) using neural computing techniques has attained significant application in modern manufacturing science. The Industry 4.0 revolution emphasizes improving manufacturing efficiency by applying machine learning in the production system. The prediction of process parameters by fuzzy graph hybridization of an ANN helps to achieve better machining operation and product quality. This enhances the overall performance of EDM and minimizes the overall manufacturing cost of production.
This research work is an attempt to develop decision support systems using the rough set and bio-inspired computing for the diagnosis of diseases. It integrates the artificial fish swarm algorithm and the rough set for the diagnosis of hepatitis-B disease. Similarly, a whale optimization-rough set hybridization is proposed for the analysis of the mental health condition of Indian people during COVID-19. Furthermore, a shuffled frog leaping algorithm integrated with the rough set is proposed as a decision support system for the analysis of lung cancer diagnosis.
...
A systematic, detailed study and investigation of a topic to discover new facts or to interpret existing facts differently is known as research. Basic research deals with increasing scientific knowledge, whereas applied research uses basic research to solve a real-world problem. The selected publications that resulted from my research and that of my doctoral team members are listed here.
The manufacturing of certain products depends on the designing technology behind them, which in turn depends on the process parameters. The selection of these parameters plays a major role in producing a better product. In general, these process parameters are modeled using heuristic methods. Much research has been carried out in radial overcut prediction using an artificial neural network. But a fuzzy neural network takes the advantages of both fuzzy systems and artificial neural networks: the artificial neural network learns from the fuzzy system. In this paper, a fuzzy graph neural network architecture is used for modeling the process parameters of a system by exploiting approximation methods from artificial neural networks. The proposed technique is analyzed over an electro-discharge machining information system for predicting radial overcut. The results obtained are compared with the predictions of artificial neural networks and found to be better.
The growth of information and communication technology makes people neglect their cultural heritage due to various factors, and this leads to a lack of cultural heritage transmission from one generation to the next. It greatly impacts pilgrims' attitudes towards cultural heritage. Besides, the development of heritage places improves the economic worth of any nation. Further, pilgrim attraction is a major concern, which in turn improves business opportunities. In general, cultural heritage depends on the historical, aesthetic, and architectural value of a particular place. Apart from these factors, some other parameters are also associated with cultural heritage. Therefore, it is significant to understand the behavioral pattern of pilgrims and their preferences. This paper takes a phenomenological approach to uncover subliminal values associated with the cultural heritage places of Odisha, India. The prime objective is to study the attitude of pilgrims towards visiting cultural heritage places. This attitude depends on different dimensions: historical, aesthetic, architectural, spiritual, environmental, economic, and managerial. Considering the uncertainty and frequent changes in human behavior, we employ variance-based structural equation modeling using partial least squares together with the rough set for analyzing the information system. Variance-based structural equation modeling using partial least squares helps us to identify the factors that are essential for the study, and then the rough set is used to generate the rules. This, in turn, helps study the attitude of pilgrims towards the cultural heritage places of Odisha.
The change in living standards has made people think about their physical health. Accordingly, healthcare organizations are concentrating more on people's physical health in terms of disease diagnosis and patient care. Digitization is a step towards this end. Nevertheless, digitization generates a huge volume of data every second. Besides, these data contain uncertainties and may be imprecise. Analyzing such uncertainties and impreciseness in an information system is a critical task. Computational intelligence techniques have been developed to handle such cases. These techniques include fuzzy sets, rough sets, soft sets, neutrosophic sets, bio-inspired, nature-inspired, and evolutionary computing. This research paper presents an extensive review of healthcare research that has been carried out using rough and bio-inspired computing. The purpose of this review is to provide an understanding of the prevailing research and relevant information in disease diagnosis concerning the rough set and bio-inspired computing. Besides, applications and the future scope of research are also presented.
Numerous graph-based clustering algorithms relying on k-nearest neighbor (KNN) have been proposed. However, the performance of these algorithms tends to be affected by many factors such as cluster shape, cluster density and outliers. To address these issues, we present a split-merge clustering algorithm based on the KNN graph (SMKNN), which is based on the idea that two adjacent clusters can be merged if the data points located in the connection layers of the two clusters tend to be consistent in distribution. In Stage 1, a KNN graph is constructed. In Stage 2, the subgraphs are obtained by removing the pivot points from the KNN graph, in which the pivot points are determined by the size of the local distance ratio of data points. In Stage 3, the adjacent cluster pairs satisfying the maximum similarity are merged, in which the similarity measure of two clusters is designed with two concepts: external connection edges and internal connection edges. In experiments on ten synthetic data sets and eight real data sets, we compared SMKNN in terms of accuracy with two traditional algorithms, two density-based algorithms, nine graph-based algorithms and four neural network based algorithms. The experimental results demonstrate the good performance of the proposed clustering method.
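Stage 1 of the pipeline above, constructing the KNN graph, can be sketched in plain Python. This is a minimal illustration with Euclidean distance over toy 2-D points; it is not the SMKNN implementation, and the later split and merge stages are omitted.

```python
# Minimal sketch of KNN-graph construction (Stage 1 of a SMKNN-style
# pipeline). Points and k are toy choices, not from the paper.
import math

def knn_graph(points, k):
    """Return an adjacency list linking each point to its k nearest neighbours."""
    edges = {}
    for i, p in enumerate(points):
        # Sort all other points by Euclidean distance and keep the k closest.
        dists = sorted((math.dist(p, q), j)
                       for j, q in enumerate(points) if j != i)
        edges[i] = [j for _, j in dists[:k]]
    return edges

# Two visually separated groups of toy points.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
g = knn_graph(points, k=2)
print(g[0])  # the two points nearest the origin: [1, 2]
```

Note that KNN edges are directed (j being among i's neighbours does not imply the converse); graph-based clustering methods typically symmetrize or weight these edges before splitting.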
The usage of various software applications has grown tremendously due to the onset of Industry 4.0, giving rise to the accumulation of all forms of data. The scientific, biological, and social media text collections demand efficient machine learning methods for data interpretability, which organizations need in decision making of all sorts. Topic models can be applied in text mining of biomedical articles, scientific articles, Twitter data, and blog posts. This paper analyzes and compares the performance of the Latent Dirichlet Allocation (LDA), Dynamic Topic Model (DTM), and Embedded Topic Model (ETM) techniques. An incremental topic model with word embedding (ITMWE) is proposed that processes large text data in an incremental environment and extracts latent topics that best describe the document collections. Experiments in both offline and online settings on large real-world document collections such as CORD-19, NIPS papers, and Tweet datasets show that, while LDA and DTM are good models for discovering word-level topics, ITMWE discovers better document-level topic groups more efficiently in a dynamic environment, which is crucial in text mining applications.
Topic models are efficient in extracting central themes from large-scale document collections, and this is an active research area. State-of-the-art techniques such as Latent Dirichlet Allocation, the Correlated Topic Model (CTM), the Hierarchical Dirichlet Process (HDP), Dirichlet Multinomial Regression (DMR) and the Hierarchical Pachinko Allocation (HPA) model are considered for comparison. The abstracts of articles were collected over different periods from the PubMed library using the keywords adolescence substance use and depression. A lot of research has happened in this area, and thousands of articles are available on PubMed; this collection is huge, so extracting information is very time-consuming. The extracted text data is used to fit the topic models, and the fitted models were evaluated using both likelihood and non-likelihood measures. The topic models are compared using evaluation parameters such as log-likelihood and perplexity. To evaluate the quality of topics, topic coherence measures have been used.
Digitalization accumulates data in a short period. Identifying crops for cultivation in smart agriculture is a common problem for agronomists. The data generated through digitalization do not provide any useful information unless some meaningful information is retrieved from them. Therefore, predicting the decision for unseen associations of attribute values from an existing information system is challenging. This paper presents a model that hybridizes a fuzzy rough set, a real-coded genetic algorithm, and linear regression. The model works in two phases. In the initial phase, the fuzzy rough set is used to remove superfluous attributes whereas, in the second phase, a real-coded genetic algorithm is used to predict the decision values of unseen instances by making use of linear regression. The proposed model is analyzed for its viability using an agricultural information system obtained from the Krishi Vigyan Kendra of Thiruvannamalai district of Tamilnadu, India. Further, the accuracy of the proposed model is compared with existing techniques.
Healthcare data analysis is a primary concern. Because of the presence of uncertainties, it requires multiple levels of knowledge extraction for decision support systems. Therefore, this paper integrates the rough set and artificial fish swarm optimization to develop a decision support system that handles the uncertainties present in an information system. In the initial stage, the artificial fish swarm-rough set procedure is implemented to find vital features. In the second phase, the rough set uses these vital features to develop a decision support system. The model is analyzed over hepatitis-B disease and attains an accuracy of 92.4%. Further, the proposed model is compared with the classical rough set, decision tree, and artificial fish swarm-decision tree models, whose accuracies are 88.9%, 83.3%, and 90.8%, respectively. The proposed model thus has 3.5% greater accuracy than the rough set model, 9.1% greater accuracy than the decision tree model, and 1.6% greater accuracy than the artificial fish swarm-decision tree model. Therefore, it is believed that the projected decision support system may be used to prevent and detect hepatitis-B disease.
Healthcare informatics data is proliferating, and analyzing this data is a challenging issue as it requires multiple levels of knowledge extraction for decision making. Knowledge discovery in databases is a solution to this end. Nevertheless, healthcare data contains uncertainties, so computational intelligence techniques are needed to process such uncertainties while considering feature selection, classification, clustering, and decision rule generation. The rough set is a relatively new technique for decision rule generation that requires no additional information. On the other hand, bio-inspired computing techniques are widely used for optimization and feature selection; primarily, bio-inspired computing uses a minimum number of features to classify a system. Therefore, the integration of the rough set and bio-inspired computing leads to optimal rule generation. Keeping this in mind, in this paper we integrate the rough set and the bat algorithm to foster knowledge discovery. In the initial phase, the bat algorithm is employed to identify the chief features that affect the decisions. Further, decision rules are generated using these selected features. This, in turn, helps to diagnose a disease at an early stage. The objective is not to replace a physician but to give an alternative opinion to the physician. It is believed that the proposed system can be used as a tool for the prevention and detection of malignancy of various communicable and non-communicable diseases. Simultaneously, it paves the way for efficient healthcare informatics. A case study on chronic liver disease is considered for analyzing the proposed model. Further, the obtained results are compared with hybridized decision tree algorithms and found to be significantly better.
The rapid growth of sustainable computing in energy, power and the environment has seized immense attention, from large organizations down to individual life. Besides, the world is advancing towards the digital mode, and the smartwatch is gaining popularity because of its additional importance in improving lifestyle. Moreover, it is not restricted to viewing time; rather, it paves a way into the user's daily life. Therefore, it is crucial to identify the factors influencing consumers' adoption of the smartwatch in sustainable wearable computing. Traditional data modelling tools, limited to the technology acceptance model, are used to this end. However, the study deals with users' behaviour, which includes uncertainties, and thus studying such problems using computational intelligence techniques is pivotal. In this research work, we hybridize the rough set, partial least squares, and formal concept analysis to study smartwatch users' adoption in wearable computing. Initially, the reliability and validity of the proposed model are analysed using structural equation modelling along with partial least squares. Further, decision rules are generated using the rough set. Finally, important factors affecting users' behavioural adoption of sustainable wearable computing are discovered using formal concept analysis.
An extensive amount of data is generated by the electronic world each day. Extracting useful knowledge from these data is challenging, and it has become a prime area of current research. Much research has been carried out in these directions, starting from statistical techniques, moving to intelligent computing, and further to hybridized computing. The foremost objective of this article is to make a comparative study between statistical, rough computing, and hybridized computing approaches. A financial bankruptcy dataset of Polish companies is considered for the comparative analysis. Results show that rough hybridization with the binary-coded genetic algorithm provides an accuracy of 98.3%, which is better than the other descriptive and rough computing techniques.
The transportation problem (TP) is a popular branch of the linear programming problem in the field of transportation engineering. Over the years, attempts have been made to find improved approaches to solve TPs. Recently, in Quddoos et al. (2012), an efficient approach, namely ASM, was proposed for solving crisp TPs. However, it is found that ASM fails to provide a better optimal solution in some cases. Therefore, a new and efficient ASM approach, LS-ASM, is proposed in this paper to enhance the inherent mechanism of the existing ASM method to solve both crisp TPs and triangular intuitionistic fuzzy transportation problems (TIFTPs). A least-looping stepping-stone method, an improved version of the existing stepping-stone method, has been employed as one of the key factors to improve the solution quality. Unlike the stepping-stone method, the least-looping stepping-stone method deals only with a few selected non-basic cells under some prescribed conditions and hence minimizes the computational burden. The framework of the proposed method is therefore a combination of ASM and the least-looping stepping-stone approach. To validate the performance of LS-ASM, a set of six case studies and a real-world problem have been solved. The statistical results obtained by LS-ASM have been compared with the existing popular modified distribution method and the original ASM method as well. The statistical results confirm the superiority of LS-ASM over the other compared algorithms, with less computational effort.
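For readers unfamiliar with the setting, a balanced transportation problem allocates supplies to demands; methods such as the stepping-stone approach start from an initial basic feasible solution. The classical north-west corner rule below is standard textbook background for obtaining such a starting solution; it is not the ASM or LS-ASM procedure proposed in the paper, and the supply/demand figures are toy data.

```python
# Classical north-west corner rule for an initial basic feasible solution
# of a balanced transportation problem (toy instance, illustrative only).

def north_west_corner(supply, demand):
    """Greedily allocate shipments starting from the top-left cell."""
    supply, demand = supply[:], demand[:]          # avoid mutating inputs
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])            # ship as much as possible
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                         # row exhausted: move down
            i += 1
        else:                                      # column exhausted: move right
            j += 1
    return alloc

# Balanced toy instance: total supply == total demand == 45.
alloc = north_west_corner([20, 25], [10, 15, 20])
print(alloc)  # [[10, 10, 0], [0, 5, 20]]
```

Improvement methods (modified distribution, stepping-stone and its least-looping variant) then iteratively reroute shipments from such a starting allocation to reduce total cost.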
Vast volumes of raw data are generated by the digital world each day. Acquiring useful information and chief features from these data is challenging, and it has become a prime area of current research. Another crucial area is knowledge inferencing. Much research has been carried out in both directions. Swarm intelligence is used for feature selection, whereas for knowledge inferencing either fuzzy or rough computing is widely used. Hybridization of intelligent and swarm intelligence techniques has been booming recently. In this research work, the authors hybridize the artificial bee colony and the rough set. In the initial phase, they employ an artificial bee colony to find the chief features. Further, these main features are analyzed using rough set rule generation. The proposed model indeed helps to diagnose a disease carefully. An empirical analysis is carried out on a hepatitis dataset. In addition, a comparative study is also presented. The analysis shows the viability of the proposed model.
The manufacturing of goods relies on the design methodology and the process parameters. The parameters used in the manufacturing process play an important role in building a quality product. Initially, heuristic techniques were used for parameter selection. Much research has been conducted to predict the radial overcut using neural networks. Besides, the fuzzy neural network has gained more popularity due to the presence of fuzziness in the machining process. In this paper, a fuzzy graph recurrent neural network architecture is used for modelling and predicting the radial overcut in electro-discharge machining. The proposed model is analysed over the information system obtained from VIT, Vellore, India. Moreover, it is also compared with the fuzzy graph neural network and the traditional neural network and found to be better in terms of accuracy.
The propagation of 5G, beyond-5G and Internet of Everything (IoE) networks is the key business force for future networks and their various applications. These networks are constantly under various assaults by means of blocking and tracking information. Therefore, it is essential to develop a real-time recognition system to handle these assaults. But not enough research has been conducted in this area so far. Hence, we propose a model to recognize various assaults online in 5G, beyond-5G and IoE networks using the dominance-based rough set and formal concept analysis. For analyzing the model, this paper incorporates legal and simulated 5G, beyond-5G and IoE network traffic, along with various types of assaults. The dominance-based rough set is used to identify the assaults, whereas the chief features involved in the various assaults are identified using formal concept analysis. The results acquired explain the capability of the projected research.
The rapid growth of information and communication technology has reached the common man and various organizations, influencing each individual’s life, work, and study, and it has led to a data explosion. Data has no utility without analysis, which has given rise to many analytical techniques. The prime objective of these techniques is to derive useful knowledge. However, the transformation of data into knowledge is not easy for many reasons: data may be disorganized, incomplete, or uncertain. Furthermore, analyzing the uncertainties present in data is not a straightforward task. Many different models, like fuzzy sets, rough sets, soft sets, neural networks, their generalizations, and hybrid models obtained by combining two or more of these models, have been fruitful in representing knowledge. To this end, this paper identifies the conventionally used rough computing techniques and discusses their concepts, developments, abstraction, hybridization, and scope of applications.
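A minimal sketch of the core rough-set construction the survey covers — lower and upper approximations induced by an indiscernibility relation — assuming a hypothetical toy information table:

```python
def partition(universe, attrs, table):
    """Equivalence classes of the indiscernibility relation on attrs."""
    classes = {}
    for x in universe:
        key = tuple(table[x][a] for a in attrs)
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

def approximations(universe, attrs, table, target):
    """Lower approximation: union of blocks fully inside the target set.
    Upper approximation: union of blocks that overlap the target set."""
    blocks = partition(universe, attrs, table)
    lower = set().union(*(b for b in blocks if b <= target))
    upper = set().union(*(b for b in blocks if b & target))
    return lower, upper

# Hypothetical table: patients 1 and 2 are indiscernible on 'fever'.
table = {1: {'fever': 'yes'}, 2: {'fever': 'yes'}, 3: {'fever': 'no'}}
lower, upper = approximations({1, 2, 3}, ['fever'], table, target={1, 3})
```

Here `lower == {3}` and `upper == {1, 2, 3}`; the boundary region `upper - lower == {1, 2}` is exactly the uncertainty rough computing makes explicit.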
Large volumes of raw data are created from the digital world every day. Acquiring useful information from these data is challenging, and it has become a prime area of current research; much work has been done in this direction. Further, in disease diagnosis, many uncertainties are involved in the information system, and intelligent techniques are employed to handle them. In this paper, we present an integrated scheme for heart disease diagnosis. The proposed model integrates cuckoo search and rough set for inferring decision rules. In the initial phase, we employ cuckoo search to discover the main features. These main features are then analyzed using rough set theory to generate rules. An empirical analysis is carried out on a heart disease dataset, and a comparative study is also presented. The comparative study demonstrates the feasibility of the proposed model.
The retail industry across the world is realizing that delivering high levels of service quality and achieving customer satisfaction are the keys to a sustainable competitive advantage. Researchers have found positive relations between retail service quality dimensions and customer satisfaction. Identifying and classifying retail customers as ‘satisfied’ or ‘dissatisfied’ according to the retail service quality dimensions would be useful to retailers in enabling strategic decision making in a competitive and dynamic environment. Retailers generate and collect a huge amount of customer data on daily transactions, customer shopping history, goods transportation, consumption patterns, and service records in a relatively short period. The explosive growth of data requires a more efficient way to extract useful knowledge that can help retailers make better business decisions and target customers who might be profitable to them. Data mining has emerged as an effective technique for exploring large amounts of data to discover meaningful patterns and rules in various fields, including retail. In this paper, retail customers are classified into either ‘satisfied’ or ‘dissatisfied’ classes according to the retail service quality dimensions. The research presents a comparative study of popular classification techniques, namely the decision tree classifier and the support vector machine, using the RStudio software. The paper uses machine learning algorithms to assess Indian retail service quality. The results would help retail organizations enhance their overall service quality and target their marketing efforts at the right group of customers.
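The study itself works in R; purely as an illustration of the split criterion underlying a decision tree classifier, here is a one-level tree (decision stump) in Python over hypothetical toy data — it scans every feature/threshold pair and keeps the split with the best training accuracy, which is the elementary step a full tree repeats recursively.

```python
def stump_fit(X, y):
    """Find the single feature/threshold/direction split with the
    highest training accuracy over binary labels y."""
    best_acc, best_rule = 0.0, None
    n = len(y)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):                      # direction of the split
                pred = [1 if sign * (row[f] - t) >= 0 else 0 for row in X]
                acc = sum(p == yy for p, yy in zip(pred, y)) / n
                if acc > best_acc:
                    best_acc, best_rule = acc, (f, t, sign)
    return best_acc, best_rule

# Hypothetical 1-feature data: 'dissatisfied' (0) vs 'satisfied' (1)
acc, rule = stump_fit([[1], [2], [5], [6]], [0, 0, 1, 1])
```

On this toy set the stump recovers a perfect split (`acc == 1.0`); a real decision tree or SVM, as compared in the paper, handles many interacting quality dimensions at once.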
Ambient intelligence is an emerging technology that enhances our lives by adding sensors and networks to our surroundings. Ambient technology is a revolution in smart devices that makes human life efficient. The smartwatch is one such device, providing flexibility in people’s daily lives by sensing and reasoning about their activities and the surrounding environment. Analyzing customers’ behaviour towards smartwatches that use ambient intelligence is a critical issue. This article analyses the behavioural intention and satisfaction of smartwatch users in an ambient environment with the help of structural equation modeling using partial least squares and fuzzy rough sets. Structural equation modeling is used to check the reliability and validity of the constructs, whereas the fuzzy rough set is used for rule generation and for studying customer satisfaction. This enhances personalization with the assistance of the human-computer interaction capabilities of ambient intelligence.
Computational intelligence innovation and the use of computers have changed the entire healthcare delivery system. Nurses are the leading crew of a healthcare organization, but they often lack skill in computer usage or in interpreting automated, computer-generated analysis. This motivates a study of the use of computers and information technology by nurses in the Indian healthcare system. Further, it is essential to identify the chief factors in which these nurses are lacking while using computers and information technology. This will help management take the necessary measures to train them and make the healthcare industry more productive in its usage of computers and information technology. To this end, data has been collected from nurses in hospitals in the state of Tamilnadu, India. Data collection is not beneficial unless the data is analyzed and meaningful information obtained from it. In this paper, we hybridize rough set and formal concept analysis to arrive at the chief factors affecting the decisions. Rough set is used to analyze the data and generate rules. These generated rules are further passed into formal concept analysis to identify the chief characteristics affecting the decisions. This in turn helps the organization provide adequate training to the nurses so that the healthcare system can move to the next stage.
Agriculture plays a vital role in the Indian economy. Considering the overall geographical space versus population in India, 7% of the population is recorded in Tamilnadu, with 3% of the water and 4% of the land resources. Thus, an automated prediction system becomes essential for predicting crops based on the nutritional security of the country. In this paper, an effort has been made to process the uncertainties by hybridizing rough set on intuitionistic fuzzy approximation space (RSIFAS) and a neural network. RSIFAS identifies the almost indiscernibility among the natural resources and helps reduce the computational procedure by employing data reduction techniques, whereas the neural network helps in the prediction process. It helps to find the crops that may be cultivated based on the available natural resources. The proposed model is analyzed on data accumulated from the Vellore district of Tamilnadu, India, and achieved 93.7% average classification accuracy. The model is compared with earlier models and found to be 6.9% more accurate in prediction.
The topology of a mobile ad hoc network (MANET) changes randomly and dynamically. The composite characteristics of a MANET make it exposed to interior and exterior attacks. Avoidance support techniques like authentication and encryption are appropriate to prevent attacks in a MANET. Thus, an authoritative intrusion detection model is required to prevent attacks. These attacks can occur at any of the layers present in the network or can be general. Many intrusion detection models have been developed to this end, but these models target only one of the layers present in the network. Therefore, an effort has been made to consider multiple layers for intrusion detection. This article uses multigranular rough set (MGRS) for intrusion detection in a MANET. The advantage of MGRS is that it can target multiple layers present in the network simultaneously by using multiple equivalence relations on the universe. The proposed model is compared with many traditional models and attained higher accuracy.
The smart watch is a new generation of watch that provides new assistance in the ambient environment. The new hardware technology of the smart watch, together with sensors and the internet, affords high human-computer interaction around the clock. In order to understand the satisfaction of smart watch users, it is highly necessary to master the technologies behind it. This paper uses the rough set rule generation technique to analyze customer satisfaction by adopting the various demographics and key features of the smart watch, thereby understanding the importance of ambient technology.
We present a machine-learning-based information retrieval system for astronomical observatories that tries to address user-defined queries related to an instrument. In the modern instrumentation scenario, where heterogeneous systems and talents are simultaneously at work, the ability to supply people with the right information helps speed up the tasks of detector operation, maintenance, and upgrades. The proposed method analyzes existing documented efforts at the site to intelligently group information related to a query and present it online to the user. The user in response can probe the suggested content and explore previously developed solutions or probable ways to address the present situation optimally. We demonstrate natural-language-processing-backed knowledge rediscovery by making use of the open source logbook data from the Laser Interferometer Gravitational-Wave Observatory (LIGO). We implement and test a web application that incorporates the above idea for the LIGO Livingston, LIGO Hanford, and Virgo observatories.
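A minimal sketch of the kind of retrieval backbone such a system could use — TF-IDF vectors ranked by cosine similarity — with hypothetical logbook-style entries; the actual LIGO application is more elaborate than this.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a small corpus of text entries."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: tf[t] / len(doc) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Rank document indices by similarity to the query (best first)."""
    vecs = tfidf_vectors(docs + [query])     # treat the query as one more doc
    qv, dvs = vecs[-1], vecs[:-1]
    return sorted(range(len(docs)), key=lambda i: cosine(qv, dvs[i]), reverse=True)
```

For example, with hypothetical entries about alignment and seismic noise, a query like `"cavity alignment"` ranks the alignment-procedure entry first and the unrelated seismic entry last.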
The internet has made a drastic change in the way data are collected, and huge collections of data have accumulated. All these data serve no purpose unless some useful information is mined from them. Prediction of future instances is a major research problem. In this work, we adopt a rough set and real coded genetic algorithm-based prediction system for predicting future instances. We adopt rough set in this work because of the uncertainties present in the data; additionally, it is used to eliminate unwanted attributes. The real coded genetic algorithm is used to predict the values of unknown instances by making use of multiple linear regression. The model is experimented over agriculture data obtained from the Tiruvannamalai district of Tamil Nadu. The experimental results show the viability of the proposed research.
In the Indian economy, agriculture is the prime vocation that contributes to the overall development of the country. Tamil Nadu accounts for approximately 7% of the nation's population, with 3% of the water resources and 4% of the land resources at the country level. Crop suitability prediction is of prime importance for enhancing the nutritional security of the developing country. Based on the several crops grown in a particular place and the availability of natural resources, one can identify the suitability of crops that can be grown there. To this end, many mathematical tools have been developed, but they fail to process the uncertainties present in the accumulated data. Therefore, in this paper an effort has been made to process the uncertainties by hybridizing rough set on fuzzy approximation space and a neural network. The rough set on fuzzy approximation space identifies the almost indiscernibility among the natural resources and helps minimize the computational procedure by employing data reduction techniques, whereas the neural network helps in the prediction process. The proposed model is analysed on agriculture data of the Vellore District of Tamil Nadu, India, and achieved 93% classification accuracy in validation. The model is compared with an earlier model and achieved 8% more accuracy when predicting unseen associations.
This article describes how agriculture is the main occupation of India and how the economy depends on agricultural production. Most of the land in India is dedicated to agriculture, and people depend on the production of agricultural products. Therefore, accurately forecasting future events based on extracted patterns plays a vital role in improving agricultural productivity. By considering the availability of micronutrients and macronutrients in the soil and water of a particular place, the growth of a plant is determined. This helps people determine the crops to be cultivated in a certain place. In this article, the forecasting is carried out using rough sets and genetic algorithms. Rough sets are used to produce the decision rules, whereas genetic algorithms are used to refine the rules and improve classification accuracy. The accuracy of the classification rules is analyzed using different selection methods and crossover operators. Results show that a genetic algorithm with roulette wheel selection and single-point crossover provides better performance when compared with other existing techniques.
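The two operators the article singles out — roulette wheel selection and single-point crossover — can be sketched as follows; the fitness function, chromosome length, and parameters are illustrative assumptions, and mutation is omitted for brevity.

```python
import random

def roulette(pop, fits, rng):
    """Roulette-wheel selection: pick an individual with probability
    proportional to its (positive) fitness."""
    r = rng.uniform(0, sum(fits))
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def single_point_crossover(a, b, rng):
    """Cut both parents at one random point and swap the tails."""
    p = rng.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]

def evolve(fitness, length=8, pop_size=20, gens=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = roulette(pop, fits, rng), roulette(pop, fits, rng)
            nxt.extend(single_point_crossover(a, b, rng))
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits (shifted to stay positive for roulette).
best = evolve(lambda bits: 1 + sum(bits))
```

In the article the chromosomes encode classification rules and the fitness is rule accuracy; the toy one-max fitness above merely exercises the two operators.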
Marketing management employs various tools and techniques, including market research, to perform accurate marketing analysis. Information and communication technology has provided a new dimension to marketing research for maximising the revenues and profits of the firm by identifying the chief attributes affecting decisions. In this paper, we present a hybrid approach for attribute selection in marketing based on rough computing and formal concept analysis. Our approach is aimed at handling an information system that contains numerical attribute values that are “almost similar” instead of “exactly similar”. To handle such an information system we use two processes—pre-process and post-process. In the pre-process, we use rough set on intuitionistic fuzzy approximation space with ordering rules to find knowledge and associations, whereas in the post-process we use formal concept analysis to identify the chief attributes affecting decisions.
The information technology revolution has brought a radical change in the way data are collected. The data collected are of no use unless some useful information is derived from them. Therefore, it is essential to apply predictive analysis to analyze the data and obtain meaningful information. Much research has been carried out in the direction of predictive data analysis, starting from statistical techniques, moving to intelligent computing techniques, and further to hybridized computing techniques. The prime objective of this paper is to make a comparative analysis between statistical, rough computing, and hybridized techniques. The comparative analysis is carried out over the financial bankruptcy dataset of the Greek industrial bank ETEVA. It is concluded that rough computing techniques provide a better accuracy of 88.2% compared to statistical techniques, whereas hybridized computing techniques provide a still better accuracy of 94.1% compared to rough computing techniques.
The present age of the internet and the rise of business have resulted in a manifold increase in the volume of data used for various applications on a day-to-day basis. Therefore, it is an obvious challenge to reduce the dataset and find useful information pertaining to the interest of the organisation. Another challenge lies in hiding sensitive information in order to provide privacy. Thus, attribute reduction and privacy preservation are two major challenges in privacy-preserving data mining. In this paper, we propose a sensitive rule hiding model to hide sensitive fuzzy association rules. The proposed model uses rough set on intuitionistic fuzzy approximation spaces with ordering to reduce the dataset dimensionality. We use triangular and trapezoidal membership functions to obtain the fuzzified information system. Finally, decreasing the support of the right-hand side of a rule is used to hide sensitive fuzzy association rules.
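A minimal sketch of the hiding step on crisp (non-fuzzy) rules, assuming hypothetical transactions: supporting transactions lose the right-hand-side items until the rule's confidence falls below the disclosure threshold.

```python
def support(itemset, txns):
    """Fraction of transactions containing every item in itemset."""
    return sum(1 for t in txns if itemset <= t) / len(txns)

def confidence(lhs, rhs, txns):
    """Confidence of the rule lhs -> rhs."""
    s_lhs = support(lhs, txns)
    return support(lhs | rhs, txns) / s_lhs if s_lhs else 0.0

def hide_rule(lhs, rhs, txns, min_conf):
    """Hide lhs -> rhs by deleting RHS items from supporting transactions
    until the rule's confidence drops below min_conf (a sanitized copy
    of the transactions is returned)."""
    txns = [set(t) for t in txns]
    for t in txns:
        if confidence(lhs, rhs, txns) < min_conf:
            break
        if lhs <= t and rhs <= t:
            t -= rhs  # lower the support of the right-hand side
    return txns

txns = [{'a', 'b'}, {'a', 'b'}, {'a', 'b'}, {'a', 'c'}, {'b'}]
sanitized = hide_rule({'a'}, {'b'}, txns, min_conf=0.6)
```

In the hypothetical example the rule {a} → {b} starts at confidence 0.75 and is pushed below the 0.6 threshold by removing `b` from one supporting transaction; the paper applies the same idea to fuzzified rules after dimensionality reduction.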
In the proposed hybrid possibilistic exponential fuzzy c-means (PEFCM) segmentation approach, exponential fuzzy c-means objective functions are recalculated to assign data to clusters. The traditional fuzzy c-means clustering process cannot handle noise and outliers, which are nonetheless forced into clusters because of the probabilistic constraint that the membership degrees of each data point across all clusters must sum to 1. We revise possibilistic exponential fuzzy clustering, which hybridizes a possibilistic method with exponential fuzzy c-means segmentation; the proposed idea partitions the data while filtering noisy data or detecting them as outliers. Our result analysis shows that PEFCM segmentation attains an average accuracy of 97.4% compared with existing algorithms. It is concluded that the possibilistic exponential fuzzy c-means segmentation algorithm is more efficient for the accurate detection of breast tumours and can assist in early detection.
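For orientation, the membership-update rule of classical fuzzy c-means — the component PEFCM extends with exponential and possibilistic terms — can be sketched as follows (1-D points, Euclidean distance; all data hypothetical):

```python
def fcm_memberships(points, centers, m=2.0):
    """Classical FCM membership update:
    u_ij = 1 / sum_k (d_ij / d_kj) ** (2 / (m - 1)),
    so each point's memberships across the clusters sum to 1 —
    exactly the probabilistic constraint PEFCM relaxes for outliers."""
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:
            # Point coincides with a center: crisp membership.
            row = [1.0 if di == 0.0 else 0.0 for di in d]
        else:
            row = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(len(d)))
                   for i in range(len(d))]
        u.append(row)
    return u

# Hypothetical intensities and two cluster centers:
u = fcm_memberships([0.0, 1.0, 10.0], centers=[0.0, 10.0])
```

The point at 1.0 gets membership ≈ 0.99 in the first cluster and ≈ 0.01 in the second, and its row still sums to 1; a noisy point far from both centers would likewise be forced to split its unit membership, which is the weakness the possibilistic term addresses.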
The convergence of information and communication technology has changed the life of the common man over the last few years, but at the same time it has increased the threats he faces. To overcome these situations, much research has been carried out in the direction of network and internet security. In addition, identification of various attacks is a great challenge before taking any security measures. Identification of phishing attacks, such as fake e-mails or websites, is one such critical problem in internetworking. Phishing is an illegal trick that collects personal information from a legitimate user: the fake e-mails or websites look genuine, and even the web pages where users are asked to send their personal information may look legitimate. Identification of such websites and e-mails is a great challenge today. To this end, in this paper we propose a model for identifying phishing attacks, and the chief attributes that make an object a phishing object, using rough set and formal concept analysis. The concept is explained with an illustration followed by a case study. The results obtained show the viability of the proposed research.
Information and communication technology has made shopping more convenient for the common man. Additionally, customers compare both the online and offline prices of a commodity. For this reason, offline shopping markets focus on customer satisfaction and try to attract customers by various means. However, prediction of a customer’s choice in an information system is a major issue today. Much research has been carried out in this direction for a single universe, but in many real-life applications a relation is established between two universes. To this end, in this paper the authors propose a model to identify customers’ choice of supermarkets using fuzzy rough set on two universal sets and a radial basis function neural network. The authors use fuzzy rough set on two universal sets on sample data to arrive at the customers’ choice of supermarkets. The information system with customer choice is further trained with a radial basis function neural network to identify the customers’ choice of supermarkets as the customer base increases. A real-life problem is presented to show the sustainability of the proposed model.
This paper covers the design and construction of a system that provides support to decrease false assumptions in the detection of breast cancer. The main purpose of this work was to avoid false assumptions in the detection process in a cost-effective manner. A model for decreasing false assumptions in a breast cancer detection system was proposed with three modules: the first pre-processes the mammogram image by removing its irrelevant parts, the second forms homogeneous blocks to segment the image, and the third performs colour quantization, which helps to separate the colours among different regions. Our proposed system reduces the false assumptions during the detection of breast cancer.
Diagnosis of cancer has been of prime concern in recent years. Medical imaging is used to analyze these diseases, but the images contain uncertainties due to various factors, and thus intelligent techniques are essential to process these uncertainties. This paper hybridizes intuitionistic fuzzy set and rough set in combination with statistical feature extraction techniques. The hybrid scheme starts with image segmentation using intuitionistic fuzzy set to extract the zone of interest and then enhances the edges surrounding it. Feature extraction using the gray-level co-occurrence matrix is then presented. Additionally, rough set is used to generate all minimal reducts and rules. These rules are then fed into a classifier to identify different zones of interest and to check whether these points carry the decision class value of cancer or not. The experimental analysis shows an overall accuracy of 98.3%, which is higher than the accuracy achieved by the hybridized fuzzy rough set model.
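The gray-level co-occurrence step can be sketched in a few lines; the tiny image, gray-level count, and single offset below are illustrative assumptions (a real pipeline would typically use a library such as scikit-image over several offsets and many Haralick features).

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset:
    entry (i, j) is the probability that a pixel of level i has a
    neighbour of level j at offset (dx, dy)."""
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Haralick contrast feature: sum over (i, j) of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

# Hypothetical 3x3 image with 3 gray levels, horizontal offset (1, 0):
img = [[0, 0, 1], [0, 0, 1], [0, 2, 2]]
p = glcm(img, dx=1, dy=0, levels=3)
```

Features such as `contrast(p)` (here 1.0 for the toy image), energy, and homogeneity computed from `p` are the statistical inputs the abstract's rough-set stage reduces and turns into rules.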
In this work, back propagation training algorithms for neural networks, mainly the resilient back propagation, conjugate gradient, and Levenberg-Marquardt methods, are compared. Material removal rates are predicted by comparing the effectiveness and efficiency of the three training algorithms on the networks. Electrical discharge machining (EDM), one of the most popular non-traditional manufacturing processes, requires no cutting tools and permits machining of hard, brittle, thin and complex geometries. Thus it is very popular in modern manufacturing industries such as surgical components, nuclear industries and aerospace. Based on the study and test results, although the Levenberg-Marquardt method was found to be faster and perform better than the other algorithms in training, the resilient back propagation algorithm had the best accuracy in the testing period.
A huge repository of terabytes of data is generated each day from modern information systems and digital technologies such as the Internet of Things and cloud computing. Analysis of these massive data requires a lot of effort at multiple levels to extract knowledge for decision making. Therefore, big data analysis is a current area of research and development. The basic objective of this paper is to explore the potential impact of big data challenges, open research issues, and the various tools associated with it. As a result, this article provides a platform to explore big data at numerous stages. Additionally, it opens a new horizon for researchers to develop solutions based on the challenges and open research issues.
Wireless networks are built upon a shared medium that makes it easy for adversaries to carry out interference or jamming attacks, which effectively cause a denial of service (DoS) of either transmission or reception functionalities. These attacks can easily be accomplished by an adversary either by bypassing MAC-layer protocols or by emitting a wireless signal targeted at jamming a particular channel. In this paper, we survey the different jamming attacks that may be employed against a wireless network. Additionally, to cope with the problem of jamming, we propose a detection strategy using dominance-based rough set. The technique is employed over physical and data link layer parameters. The proposed method is applied to the CRAWDAD dataset and achieved an accuracy of 98.36%.
A denial-of-service (DoS) attack aims to block the services of a victim system either temporarily or permanently by sending huge amounts of garbage traffic, in protocols such as the transmission control protocol, user datagram protocol, internet control message protocol, and hypertext transfer protocol, using single or multiple attacker nodes. Maintaining an uninterrupted service system is technically difficult as well as economically costly. With the discovery of new system vulnerabilities, new techniques for determining these vulnerabilities have been implemented. In general, probabilistic packet marking (PPM) and deterministic packet marking (DPM) are used to identify DoS attacks. Later, an intelligent decision prototype was proposed; its main advantage is that it can be used with both PPM and DPM. But it is observed that the data available in a wireless network information system contain uncertainties. Therefore, an effort has been made to detect DoS attacks using dominance-based rough set. The accuracy of the proposed model obtained over the KDD Cup dataset is 99.76%, which is higher than the accuracy achieved by the resilient back propagation (RBP) model.
Currently, the internet is the best tool for distributed computing, which involves spreading data geographically. But retrieving information from huge data is critical, and the data have no relevance unless they provide certain information. Prediction of missing associations can be viewed as a fundamental problem in machine learning, where the main objective is to determine decisions for the missing associations. Mathematical models such as the naive Bayes structure, human-composed network structure, and Bayesian network modelling were developed to this end, but they have certain limitations and fail to include uncertainties. Therefore, an effort has been made to process inconsistencies in the data with the introduction of rough set theory. This paper uses two processes, pre-process and post-process, to predict the decisions for the missing associations in the attribute values. In the pre-process, rough set is used to reduce the dimensionality, whereas a neural network is used in the post-process to explore the decisions for the missing associations. A real-life example is provided to show the viability of the proposed research.
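As a stand-in for the post-process network, a minimal single-neuron perceptron trained on a hypothetical reduced table illustrates how a decision value can be predicted for an unseen association; the paper's network is more capable than this sketch.

```python
def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a single-layer perceptron on binary decisions y."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for row, target in zip(X, y):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, row)) + b > 0 else 0
            err = target - pred                       # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, row)]
            b += lr * err
    return w, b

def predict(w, b, row):
    """Decision for a (possibly previously missing) association."""
    return 1 if sum(wi * xi for wi, xi in zip(w, row)) + b > 0 else 0

# Hypothetical reduced table: two attributes kept by the rough-set
# pre-process, decision = 1 only when both attributes are present.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

After training, `predict(w, b, row)` reproduces the known decisions and supplies one for any new attribute combination.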
In modern heterogeneous 4G wireless networks, the essential criterion is uninterrupted call service and continuity of the call. Handoff is the best technique to achieve continuous call service. In heterogeneous wireless networks, the key challenge is maintaining a continuous call connection across different networks like Wi-Fi, WiMAX, WLAN, and CDMA. In this paper, various handoff decision algorithms are discussed and their performance is analyzed. A modified algorithm is proposed and its results are compared with the performance of other vertical handoff algorithms. The analysis and results show that the modified algorithm gives better results in minimizing the processing delay during the handoff process. The best network among the available networks is chosen using the technique for order preference by similarity to ideal solution (TOPSIS).
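The TOPSIS ranking used for network selection is well defined and compact; the sketch below assumes hypothetical criteria (bandwidth as a benefit criterion, delay as a cost criterion) and equal weights.

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives by closeness to the ideal solution (TOPSIS).
    matrix: rows = alternatives, columns = criteria;
    benefit[j] is True if larger is better for criterion j."""
    ncol = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncol)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncol)] for row in matrix]
    # Ideal and anti-ideal solutions per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # 1 = ideal, 0 = anti-ideal
    return scores

# Hypothetical candidates: [bandwidth (benefit), delay (cost)]
scores = topsis([[100, 10], [50, 50]], weights=[0.5, 0.5],
                benefit=[True, False])
```

The network with the highest score is selected; here the first candidate dominates on both criteria and therefore scores 1.0.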
With the rapid growth of networking and internet-dependent activities in our daily lives, security has become an important issue. In general, security is even more important for web services and online transactions. A web service has a configuration that represents the constraints and capabilities of the security policies at both internal and end nodes. It defines the security indices that are required, the encryption algorithms that are used, and the privacy rules that have to be employed at all nodes. Present realizations of the Advanced Encryption Standard (AES) on Reduced Instruction Set Computing (RISC) architectures do not support parallel encryption and decryption, and they only allow a specific cipher block size. To overcome these limitations, in this paper we propose and analyze a secured web service system which generates dynamic keys and supports parallel encryption and decryption.
Enterprise mobility has been increasing its reach over the years. Initially, mobile devices were adopted as consumer devices. However, enterprises the world over have rightly taken the leap and started using this ubiquitous technology to manage their employees as well as to reach out to customers. While the mobile ecosystem has been evolving over the years, the increased exposure of mobility in the enterprise framework has brought major focus on its security aspects. While significant focus has been put on network security, this paper discusses an approach that can be taken at the mobile application layer, which would reduce the risk to enterprises.
Rough set was conceptualized to deal with indiscernibility or imperfect knowledge about elements in numerous real-life scenarios. But it was noticed later that an information system may establish a relation with more than one universe. So, rough set on one universal set was further extended to rough set on two universal sets. This paper presents eleven possible types of classifications in total, and it is proved that of those eleven types only the five hypothesized by Busse (1988) are elementary; the remaining six types can be reduced to the elementary five either directly or transitively. This paper also analyzes all possible combinations of types of elements for classifications with 2 and 3 elements. It is established that the number of classifications with 2 elements is 3, whereas with 3 elements it is 8 instead of 64.
The convergence of information and communication technology has brought a radical change in the way data are collected or generated for multi-criteria decision making. Huge data is of no use unless it provides certain information. It is very tedious to select the best option among an array of alternatives, and it becomes more tedious when the data contain uncertainties and the objectives of evaluation vary in importance and scope. Data is of no use unless it is unlocked to gain insight into customers, markets and organizations. Therefore, processing these data to obtain decisions is a great challenge. Based on decision theory, many methods have been introduced in the past to solve multi-criteria decision making problems. The limitation of these approaches is that they consider only certain information about the weights and decision values to make decisions; this makes them less useful when managing uncertain and vague information. In addition, an information system may establish a relation between two universal sets. In such situations, multi-criteria decision making is very challenging. Therefore, an effort has been made in this paper to process inconsistencies in data with the introduction of intuitionistic fuzzy rough set theory on two universal sets.
The rough set philosophy is based on the concept that there is some information associated with each object of the universe. The set of all objects under consideration for a particular discussion is considered a universal set, and there is a need to classify objects of the universe based on the indiscernibility (equivalence) relation among them. From the viewpoint of granular computing, the classical rough set model is built on a single granulation, generally carried out by an equivalence relation defined over a universal set. It has been extended to the multi-granulation rough set model, in which the set approximations are defined by using multiple equivalence relations on the universe simultaneously. But in many real-life scenarios, an information system establishes relations with different universes. This motivates the extension of multi-granulation rough set on a single universal set to multi-granulation rough set on two universal sets. In this paper, we define the multi-granulation rough set for two universal sets U and V, and study the algebraic properties that are interesting in the theory of multi-granular rough sets. This helps in describing and solving real-life problems more accurately.
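On a single universe, the optimistic multi-granulation approximations can be sketched directly (the two-universe case studied in the paper adds a relation between U and V); the partitions and target set below are hypothetical.

```python
def eq_class(x, partition):
    """The block of the partition containing x."""
    for block in partition:
        if x in block:
            return block

def optimistic_mgr(universe, partitions, target):
    """Optimistic multi-granulation rough set over several partitions:
    x is in the lower approximation if SOME granulation puts its whole
    class inside the target; x is in the upper approximation if EVERY
    granulation gives its class a non-empty overlap with the target."""
    lower = {x for x in universe
             if any(eq_class(x, p) <= target for p in partitions)}
    upper = {x for x in universe
             if all(eq_class(x, p) & target for p in partitions)}
    return lower, upper

# Hypothetical universe with two equivalence relations (as partitions):
U = {1, 2, 3, 4}
P1 = [{1, 2}, {3, 4}]
P2 = [{1}, {2, 3}, {4}]
lower, upper = optimistic_mgr(U, [P1, P2], target={1, 2, 3})
```

Object 3 enters the lower approximation only through the second granulation ({2, 3} ⊆ target), which is exactly the extra descriptive power multiple relations provide over a single one.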
In the modern era of computing, there is a need for development in data analysis and decision making. Most of our tools are crisp, deterministic and precise in character, but general real-life situations contain uncertainties. To handle such uncertainties, many theories have been developed, such as fuzzy set, rough set, and rough set on fuzzy approximation spaces, but all these theories have their own limitations. To overcome the limitations, the concept of soft set was introduced. However, soft set also fails if the attributes in the information system are almost identical rather than exactly identical. In this paper, we propose a decision-making model consisting of two processes, pre-process and post-process, to mine decisions. In the pre-process we use rough set on fuzzy approximation spaces to get the almost equivalence classes, whereas in the post-process we use soft set techniques to obtain decisions. The proposed model is tested over an institutional dataset, and the results show the practical viability of the proposed research.
In this modern era of computing, the information technology revolution has brought drastic changes in the way data are collected for knowledge mining. The collected data are of value only when they provide relevant knowledge pertaining to the interest of an organization. Therefore, the real challenge lies in converting high-dimensional data into knowledge and using it for the development of the organization. Collected data are generally released on the internet for research purposes after sensitive information has been hidden, and therefore privacy preservation becomes the key factor for any organization in safeguarding internal data and sensitive information. Much research has been carried out in this regard, one of the earliest approaches being the removal of identifiers; however, the chances of re-identification remain very high. Techniques like k-anonymity and l-diversity help in making the records unidentifiable in their own ways, but these techniques are not fully shielded against attacks. In order to overcome their drawbacks, we propose improved versions of anonymization algorithms. The result analysis shows that the proposed algorithms perform better than existing algorithms.
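As a minimal illustration of the k-anonymity property the paper builds on, the following Python sketch checks whether every quasi-identifier combination occurs in at least k records (the records and quasi-identifiers here are hypothetical and already generalized; this is not the paper's improved algorithm):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records of the table."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical, already-generalized records
records = [
    {"age": "20-30", "zip": "560*", "disease": "flu"},
    {"age": "20-30", "zip": "560*", "disease": "cold"},
    {"age": "30-40", "zip": "571*", "disease": "flu"},
    {"age": "30-40", "zip": "571*", "disease": "asthma"},
]

print(is_k_anonymous(records, ["age", "zip"], 2))  # True
print(is_k_anonymous(records, ["age", "zip"], 3))  # False
```

Anonymization algorithms generalize or suppress quasi-identifier values until a check of this form passes.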
In the present age of the internet, data are accumulated at a dramatic pace. The accumulated data have no relevance unless they provide useful information pertaining to the interest of the organization. But the real challenge lies in hiding sensitive information in order to provide privacy. Therefore, attribute reduction becomes an important aspect of handling such a huge database: eliminating superfluous or redundant data enables sensitive rules to be hidden efficiently before the data are disclosed to the public. In this paper we propose a privacy-preserving model to hide sensitive fuzzy association rules. Our model uses two processes, a preprocess and a postprocess, to mine fuzzified association rules and to hide sensitive rules. Experimental results demonstrate the viability of the proposed research.
Performance evaluation of organizations, especially educational institutions, is a very important area of research that needs further cultivation. In this paper, we propose a performance evaluation method for educational institutions using rough sets on fuzzy approximation spaces with ordering rules and information entropy. In order to measure the performance of educational institutions, we construct an evaluation index system. Rough sets on fuzzy approximation spaces with ordering are applied to explore the evaluation index data at each level. Furthermore, the concept of information entropy is used to determine the weighting coefficients of the evaluation indexes, and we identify the most important indexes influencing those coefficients. The proposed approach is validated and shows practical viability. Moreover, it can be applied to other kinds of organizations.
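The entropy-based weighting step can be sketched with the standard entropy-weight formulation (the actual evaluation index system and any normalization used in the paper are assumptions not reproduced here):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: rows are institutions, columns are
    evaluation indexes. An index whose values vary more across
    institutions carries more information and gets a larger weight."""
    m, n = len(matrix), len(matrix[0])
    diversities = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        # normalized Shannon entropy of the index, in [0, 1]
        entropy = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        diversities.append(1.0 - entropy)
    s = sum(diversities)
    return [d / s for d in diversities]

# index 0 is constant across institutions; index 1 varies
weights = entropy_weights([[60, 10], [60, 30], [60, 50]])
```

Here the constant index receives (near-)zero weight while the varying index dominates, matching the intuition that indexes which discriminate between institutions should influence the evaluation most.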
Nodes in a MANET face difficulties in identifying trusted nodes for efficient communication with the available information. Establishing a trusted path for efficient communication is highly challenging because of the vagueness that exists among nodes. Accordingly, nodes need a significant trust evaluation process that classifies the trusted and untrusted nodes in a MANET. In this paper, we propose a trust evaluation model that uses a fuzzy proximity relation with ordering. The fuzzy proximity relation classifies the nodes based on almost-similarity using node metrics. By introducing ordering, we identify the trust level of nodes, and communication is then established through the trusted nodes. Finally, we compare our proposed model with AODV; the comparison results show the viability of our proposed model.
The notion of rough set captures indiscernibility of elements in a set. But in many real-life situations, an information system establishes relations between different universes. This motivated the extension of rough sets on a single universal set to rough sets on two universal sets. In this paper, we introduce rough equality and rough inclusion of sets on two universal sets, employing the notions of lower and upper approximation. We also establish some basic properties that refer to our knowledge about the universes.
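For a finite case, the lower and upper approximations over two universal sets can be sketched as follows. The definitions used below (lower approximation as the set of u whose successor set lies inside X, upper as those whose successor set meets X) are one common formulation for a binary relation R ⊆ U × V and may differ in detail from the paper's:

```python
def approximations(R, U, X):
    """Lower/upper approximations of X (a subset of V) under a binary
    relation R between U and V:
    lower = {u : R(u) nonempty and R(u) contained in X},
    upper = {u : R(u) intersects X}."""
    def succ(u):
        return {v for (a, v) in R if a == u}
    lower = {u for u in U if succ(u) and succ(u) <= X}
    upper = {u for u in U if succ(u) & X}
    return lower, upper

U = {1, 2, 3}
R = {(1, "a"), (2, "a"), (2, "b"), (3, "c")}
lower, upper = approximations(R, U, {"a"})
print(lower, upper)  # lower {1}, upper {1, 2}
```

A set is roughly included in another, or roughly equal to it, when these approximations satisfy the corresponding inclusion or equality conditions.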
The notion of rough set captures indiscernibility of elements in a set. But in many real-life situations, an information system establishes relations between different universes. This motivated the extension of rough sets on a single universal set to rough sets on two universal sets. In this paper, we introduce approximation of classifications and measures of uncertainty based on rough sets on two universal sets, employing the knowledge induced by binary relations.
Medical diagnosis processes vary in the degree to which they attempt to deal with complicating aspects of diagnosis such as the relative importance of symptoms, varied symptom patterns, and the relations between diseases themselves. The rough set approach has two major advantages over other methods. First, it can handle different types of data, such as categorical and numerical data. Second, it makes no assumptions such as a probability distribution function in stochastic modeling or a membership grade function in fuzzy set theory. It involves pattern recognition through logical computational rules rather than approximation through smooth mathematical functional forms. In this paper we use rough set theory as a data mining tool to derive useful patterns and rules for kidney cancer diagnosis. In particular, historical data from twenty-five research hospitals and medical colleges are used for validation, and the results show the practical viability of the proposed approach.
Educational quality varies in the degree to which institutions deal with competing aspects of quality, such as the relative importance of various quality attributes of key process areas. With the emergence of information and communication technology, educational quality has changed drastically. This revolution has brought a radical change in the way educational data are generated for ease of decision making. It is a well-established fact that using information at the right time provides an advantage to educational quality, but the real challenge lies in converting high-dimensional data into knowledge. Though present technology helps in creating databases, most of the data may not be relevant for formulating a quality educational model. In this paper, we propose a new capability maturity decision-making model based on rough computing for extracting key process areas and their relevance for the development of quality education. In particular, data from 769 educational institutions are considered for validation, and the results show the practical viability of the proposed model.
Intuitionistic fuzzy approximation space is a generalization of fuzzy approximation space. In this paper, we define the intuitionistic fuzzy rough set for two universal sets U and V and introduce the concept of the solitary set with respect to an intuitionistic fuzzy relation from U to V. Based on the solitary set, we study the algebraic properties that are of interest in the theory of rough sets. We further present an application of the intuitionistic fuzzy rough set on two universal sets for better knowledge representation.
The information technology revolution has brought a radical change in the way data are collected or generated for ease of decision making, and it is generally observed that data are not collected consistently. The huge amount of data has no relevance unless it provides useful information; only by unlocking the hidden data can we gain insight into customers, markets, and even the setup of a new business. Therefore, the absence of associations among attribute values may itself carry information that helps predict decisions for an existing or new business. Based on decision theory, many mathematical models were developed in the past, such as the naïve Bayes structure, human-composed network structures, and Bayesian network modeling, but many such models have failed to include important aspects of classification. An effort to process inconsistencies in data was therefore made by Pawlak with the introduction of rough set theory. In this paper, we use two phases, a preprocess and a postprocess, to predict the output values for missing associations among attribute values. In the preprocess we use rough computing, whereas in the postprocess we use Bayesian classification to explore the output value for the missing associations and to obtain better knowledge affecting decision making.
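A minimal sketch of the Bayesian-classification postprocess on categorical data is given below. It uses plain frequency counting with Laplace smoothing; the attribute names are hypothetical and the paper's combined rough-Bayesian pipeline is not reproduced:

```python
import math
from collections import Counter

def naive_bayes_predict(rows, target, query):
    """Predict the most probable value of `target` given the attribute
    values in `query`, by frequency counting with Laplace smoothing."""
    classes = Counter(r[target] for r in rows)
    best, best_score = None, float("-inf")
    for cls, n_cls in classes.items():
        score = math.log(n_cls / len(rows))  # log prior
        subset = [r for r in rows if r[target] == cls]
        for attr, val in query.items():
            hits = sum(1 for r in subset if r[attr] == val)
            score += math.log((hits + 1) / (n_cls + 2))  # smoothed likelihood
        if score > best_score:
            best, best_score = cls, score
    return best

rows = [
    {"outlook": "sunny", "play": "no"},
    {"outlook": "sunny", "play": "no"},
    {"outlook": "rain", "play": "yes"},
    {"outlook": "rain", "play": "yes"},
]
print(naive_bayes_predict(rows, "play", {"outlook": "sunny"}))  # no
```

The missing decision value is filled with the class that maximizes the smoothed posterior given the observed attribute values.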
Mining association rules over granules is a common data mining technique used to extract knowledge from the universe. To characterize the elements of the universe and extract knowledge about it, we classify its elements based on an indiscernibility relation. However, many information systems contain numerical attribute values that are almost similar rather than exactly identical. To handle such information systems, we use almost-indiscernibility via rough sets on intuitionistic fuzzy approximation spaces with ordering rules. The classification results in a set of classes called granules, which are the basic building blocks of knowledge about the universe. Granular computing processes these granules and produces possible patterns and associations. In this paper we study the concepts of almost-indiscernibility and knowledge granulation along with association rules using the granular computing method. We process the granules obtained as the result of classification and find the association rules between them.
Medical diagnosis processes vary in the degree to which they attempt to deal with complicating aspects of diagnosis such as the relative importance of symptoms, varied symptom patterns, and the relations between diseases themselves. Based on decision theory, many mathematical models such as crisp sets, probability distributions, fuzzy sets and intuitionistic fuzzy sets were developed in the past to deal with these aspects, but many such models have failed to include important aspects of expert decisions. An effort to process inconsistencies in data was therefore made by Pawlak with the introduction of rough set theory. Though rough sets have major advantages over other methods, they generate too many rules, which creates difficulties in decision making; it is therefore essential to minimize the decision rules. In this paper, we use two phases, a preprocess and a postprocess, to mine suitable rules and explore the relationships among the attributes. In the preprocess we use rough set theory to mine suitable rules, whereas in the postprocess we apply formal concept analysis to these rules to explore better knowledge and the most important factors affecting decision making.
The emergence of computers and the information technology revolution have made tremendous changes in the real world and provide a new dimension for intelligent data analysis. It is well understood that information at the right time and the right place yields better knowledge. However, the challenge arises when a large volume of inconsistent data is given for decision making and knowledge extraction. To handle such imprecise data, mathematical tools of great importance have been developed by researchers in the recent past, namely fuzzy sets, intuitionistic fuzzy sets, rough sets, formal concept analysis and ordering rules. It is also observed that many information systems contain numerical attribute values that are almost similar rather than exactly similar. To handle such information systems, in this paper we use two phases, a preprocess and a postprocess. In the preprocess we use rough sets on intuitionistic fuzzy approximation spaces with ordering rules to find knowledge, whereas in the postprocess we use formal concept analysis to explore better knowledge and the vital factors affecting decisions.
Data are being created, collected and accumulated at a dramatic pace, especially in the age of the internet. But the transmission of data and information faces significantly greater challenges in Mobile Ad-hoc Networks (MANETs), especially for military and vehicular applications, and secure routing is one of the most important issues in a MANET. In this paper, we present an optimal, cluster-based intra-domain secure routing scheme using elliptic curve cryptography (ECC). The developed cluster-based routing scheme enables efficient communication within the MANET and achieves scalability in large networks by using a clustering technique in which packets are routed via cluster-head-advertised routes. Clusters can be formed based on the type of service or on common tasks. Each cluster has a set of "traveling companions": nodes that stick together as a group for some common task and are authenticated offline using symmetric and public keys for data exchange.
Data are being collected and accumulated at a dramatic pace, especially in the age of the internet, and it is very hard for humans to obtain the useful information hidden in this voluminous data. For knowledge extraction, many real-life problems deal with ordering of objects instead of classifying them. However, classification is not appropriate for an information table whose attribute values are not exactly identical but almost identical, because objects characterized by almost the same information are almost indiscernible in view of the available information. A fuzzy proximity relation is therefore suitable for processing such an information table. In this paper, we propose a knowledge mining model that combines classification due to rough sets on fuzzy approximation spaces with ordering of objects for mining knowledge.
The concept of a fuzzy approximation space, which depends on a fuzzy proximity relation, is a generalization of the concept of the knowledge base. An intuitionistic fuzzy approximation space, which depends on an intuitionistic fuzzy proximity relation, is a still better generalization. Therefore, rough sets defined on intuitionistic fuzzy approximation spaces extend the concept of rough sets on fuzzy approximation spaces. This paper shows how rough sets on intuitionistic fuzzy approximation spaces provide better results than rough sets on fuzzy approximation spaces in knowledge representation.
Basic rough sets, defined by Pawlak, have been extended in many directions by different authors to improve their modelling power. In this paper we consider one such extension, rough sets on intuitionistic fuzzy approximation spaces, introduced and studied in [2]. Here we define types of such rough sets and determine the types obtained as their union and intersection. We illustrate through an example the knowledge representation capability of this new kind of rough set and how inferences can be drawn using analysis techniques.
A fuzzy relation is an extension of a crisp relation on any set U, and fuzzy proximity relations on U are much more general and abundant than equivalence relations. The fuzzy approximation space, which depends upon a fuzzy proximity relation defined on a set U, is a generalisation of the concept of the knowledge base. So rough sets defined on fuzzy approximation spaces extend the concept of rough sets on knowledge bases. The results of the present paper extend the basic properties of rough sets and results involving set-theoretic operations on types of rough sets established by Tripathy and Mitra.
Communication is the basic need for any social activity, whether in business, community life or mere livelihood, and the information society plays a vital role in it. Recent developments in information and communications technologies have enabled everybody to communicate regardless of geographical location, infrastructure feasibility, or distance from the wired network. This paper presents the background of the information society paradigm, in which rapid developments in computer technology enable mobility and total access from any place at any time, and argues that the information obtained through mobile communication is key to the success of a Business Intelligence (BI) system. Fundamental components of a BI system, namely key information technologies and the BI applications that support different decisions in an organization, are discussed. Applications of this kind are rapidly coming into real time, and business houses are adapting to this new method of intelligent and communicative business systems.
Puneet Kumar, V K Jain, Dharminder Kumar (Editors)
Joseph Tan (Editor)
B. K. Panigrahi, M. Hoda, V. Sharma, S. Goel (Editors)
D. P. Acharjya, V. Santhi (Editors)
D. P. Acharjya, M. Kalaiselvi Geetha (Editors)
Narendra Kumar Kamila (Editor)
Wasan Shaker Awad, El Sayed M. El-Alfy, Yousif Al-Bastaki (Editors)
Noor Zaman, Mohamed Elhassan Seliaman, Mohd Fadzil Hassan, Fausto Pedro Garcia Marquez (Editors)
N. R. Shetty, N. H. Prasad, N. Nalini (Editors)
Muhammad Usman (Editor)
Siddhartha Bhattacharyya and Paramartha Dutta (Editors)
Ajay Verma, Arvind Kumar, M. K. Pradhan (Editors)
Vishal Bhatnagar (Editor)
Biju Issac and Nauman Israr (Editors)
B. K. Tripathy, D. P. Acharjya (Editors)
Ajit Kumar Roy (Editor)
P. Venkata Krishna, M. Rajasekhara Babu, Ezendu Ariwa (Editors)
The recent outbreak of coronavirus disease (COVID-19) has proven to be a global pandemic that has claimed millions of lives and left the world in distress. Many research and government agencies are still trying to find its cause and origin. Various measures and policies were adopted by governments to minimize the spread of the virus. However, a major challenge was determining the mental status of patients while they were isolated from the rest of the world, and a further concern is to relate this to sudden changes in daily life, lockdown rules, and the overall state of the economy. In this paper, we propose a topic extraction and sentiment framework for Twitter and CORD-19 data to analyze themes across the text corpus. The collected tweet dataset is used to gain insight into people's emotions and how they responded to the measures taken during the pandemic crisis.
The discovery of knowledge from large-scale text or semi-structured data is very difficult. In text mining, useful information is extracted from such a large text corpus to fulfill a user's current information need. This process is exploited by various organizations for quality improvement, business needs, and understanding user behavior. Text in unstructured and semi-structured form can come from sources such as medical, financial, market, scientific, and other documents. Text mining applies a quantitative approach to analyze massive amounts of textual data and tries to solve the information overload problem. The main objective here is to review text mining techniques, application areas, and open issues.
Predicting the next word, letter or phrase while the user is typing is a valuable tool for improving user experience. Users communicate, write reviews and express opinions on such platforms frequently, often while on the move, so it has become necessary to provide an application that reduces typing effort and spelling errors when time is limited. Text data is growing in size due to the extensive use of social media platforms, which makes a text prediction application difficult to implement given the volume of text to be processed for language modeling. This paper's primary objective is to process a large text corpus and implement a probabilistic model such as N-grams to predict the next word from the user's input. In this exploratory research, n-gram models are discussed and evaluated using Good-Turing estimation, the perplexity measure and the type-to-token ratio.
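A bigram next-word predictor with a perplexity measure can be sketched as follows. Add-one smoothing stands in for the Good-Turing estimation used in the paper, and the toy corpus is illustrative only:

```python
import math
from collections import Counter

class BigramModel:
    """Bigram next-word predictor with add-one (Laplace) smoothing."""
    def __init__(self, corpus):
        tokens = corpus.split()
        self.vocab = set(tokens)
        self.unigrams = Counter(tokens)
        self.bigrams = Counter(zip(tokens, tokens[1:]))

    def prob(self, w1, w2):
        # P(w2 | w1) with add-one smoothing over the vocabulary
        return (self.bigrams[(w1, w2)] + 1) / (self.unigrams[w1] + len(self.vocab))

    def predict(self, w1):
        # most probable next word after w1
        return max(self.vocab, key=lambda w: self.prob(w1, w))

    def perplexity(self, text):
        tokens = text.split()
        log_p = sum(math.log(self.prob(a, b)) for a, b in zip(tokens, tokens[1:]))
        return math.exp(-log_p / max(len(tokens) - 1, 1))

m = BigramModel("the cat sat on the mat the cat ran")
print(m.predict("the"))  # cat
```

Lower perplexity on held-out text indicates a better-fitting model; Good-Turing would replace the `prob` method, reallocating probability mass based on frequencies of frequencies.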
The manufacturing of goods relies on its design methodology and process parameters, and the parameters used in a manufacturing process play an important role in building a quality product. Initially, heuristic techniques were used for parameter selection; later, many researchers used neural networks to predict the radial overcut. Fuzzy neural networks have gained further popularity because they combine a fuzzy system with a neural network. In this paper, a fuzzy graph recurrent neural network architecture is used for modelling and predicting the radial overcut for an electro-discharge machining information system.
The internet has brought a big revolution to the real world and poses many challenges to researchers by generating an enormous amount of data, much of which is unwanted. Before processing such a dataset, the important features present in it must be retrieved. Feature selection is important because the performance of a model built for classification, prediction or clustering depends mainly on the number of relevant features in the dataset. In this work, a real-coded genetic algorithm is used to find the important features, with the fuzzy-rough degree of dependency as its fitness function, for an agricultural dataset, the iris dataset and the Pima Indian diabetes dataset. The experimental results show that the proposed method produces relevant features while maintaining classification accuracy.
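The degree-of-dependency signal that drives the fitness function can be sketched in its crisp rough-set form (the paper uses a fuzzy-rough variant, and the attribute names below are hypothetical):

```python
from collections import defaultdict

def dependency_degree(rows, cond_attrs, dec_attr):
    """Crisp rough-set degree of dependency gamma = |POS| / |U|:
    the fraction of objects whose condition-attribute class
    determines the decision attribute uniquely."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in cond_attrs)].append(i)
    positive = 0
    for members in classes.values():
        decisions = {rows[i][dec_attr] for i in members}
        if len(decisions) == 1:  # class lies wholly in one decision class
            positive += len(members)
    return positive / len(rows)

rows = [
    {"soil": "loam", "yield": "high"},
    {"soil": "loam", "yield": "high"},
    {"soil": "clay", "yield": "high"},
    {"soil": "clay", "yield": "low"},
]
print(dependency_degree(rows, ["soil"], "yield"))  # 0.5
```

A genetic algorithm can then encode each candidate feature subset as a chromosome and score it by this dependency (optionally penalized by subset size), evolving toward small subsets that preserve the decision.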
Image processing techniques have been crucial for analyzing and resolving issues in medical imaging over the last two decades. Medical imaging is a process or technique for finding the inner or outer structure of the human body; it supports medical diagnosis, the analysis of illnesses, and the development of datasets of normal and abnormal images. Medical imaging can be divided into invisible-light and visible-light imaging: the latter can be understood by a lay person, whereas the former must be interpreted by a radiologist. Analysis of both requires segmentation and feature extraction. Although many medical imaging techniques are available, the authors restrict this survey to tumor detection through mammograms and magnetic resonance imaging. In this paper, the authors survey various segmentation and feature extraction methods used for preprocessing medical images.
Advancing knowledge and understanding of the plants growing around us plays a crucial role medicinally, economically and in sustainable agriculture, and the identification of plant images has become an interdisciplinary focus in computer vision. Taking advantage of rapid advances in computer vision and deep learning, a convolutional neural network (CNN) approach is used to learn feature representations of 185 classes of leaves taken from Columbia University, the University of Maryland, and the Smithsonian Institution. For large-scale classification of plants in the natural environment, a 50-layer deep residual learning framework consisting of 5 stages is designed. The proposed model achieves a recognition rate of 93.09 percent as testing accuracy on the LeafSnap dataset, illustrating that deep learning is a highly promising forestry technology.
The advancements made in cloud applications have attracted the healthcare community, and analysis of Electronic Health Records (EHRs) has gained much importance among researchers. As an EHR contains sensitive information, its security and privacy requirements must be analyzed effectively. Services such as outsourcing and increased computation have enhanced the usage of digital technologies, and the resulting interconnection of digital data with different network devices leads to the study of reliability, scalability and security. Security is the major concern for outsourced databases. Prior works such as searchable encryption and proxy re-encryption have been introduced by research communities, yet the security requirements of healthcare applications are not fully met, and prior searchable encryption schemes degrade in terms of storage and computation. In this paper, we propose a timing-enabled proxy re-encryption system that permits users to access the data only within a certain time period T; each identified user is assigned a set of attributes and a valid time period T. A security analysis covering decryption key compromise, identity expiration, conjunctive similarity keyword search, and the reduced complexity of the key update phase is presented.
Clustering analysis partitions a data set X into c clusters. Fuzzy c-means clustering was proposed in 1984 and was later used for the segmentation of medical images, and many researchers have worked to improve fuzzy c-means models. In this paper, we propose a novel intuitionistic possibilistic fuzzy c-means algorithm: possibilistic fuzzy c-means and intuitionistic fuzzy c-means are hybridized to overcome the problems of fuzzy c-means. The proposed clustering approach retains the strengths of possibilistic fuzzy c-means, which avoids the coincident-cluster problem, reduces noise, and is less sensitive to outliers, while intuitionistic fuzzy c-means improves on basic fuzzy c-means by using intuitionistic fuzzy sets. Our proposed intuitionistic possibilistic fuzzy c-means technique is applied to the clustering of mammogram images for breast cancer detection from abnormal images. Experiments show high accuracy in both clustering and breast cancer detection.
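The standard fuzzy c-means iteration that the proposed hybrid builds on can be sketched as follows (a baseline only: the possibilistic and intuitionistic extensions, and the mammogram data, are not reproduced; the 2-D points are illustrative):

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fuzzy_c_means(points, init_centers, m=2.0, iters=50):
    """Plain fuzzy c-means with fuzzifier m.
    Returns final cluster centers and the membership matrix U."""
    centers = [tuple(v) for v in init_centers]
    c, d = len(centers), len(points[0])
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = []
        for p in points:
            dists = [max(dist(p, v), 1e-12) for v in centers]
            U.append([1.0 / sum((di / dk) ** (2.0 / (m - 1.0)) for dk in dists)
                      for di in dists])
        # center update: v_j = sum_i u_ij^m x_i / sum_i u_ij^m
        centers = []
        for j in range(c):
            den = sum(U[i][j] ** m for i in range(len(points)))
            centers.append(tuple(
                sum(U[i][j] ** m * points[i][k] for i in range(len(points))) / den
                for k in range(d)))
    return centers, U

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers, U = fuzzy_c_means(pts, init_centers=[pts[0], pts[-1]])
```

The possibilistic variant relaxes the row-sum-to-one constraint on U (adding typicality values) and the intuitionistic variant augments memberships with hesitation degrees; both modify the update equations above.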
Our research work elaborates the design and construction of a method that supports a reduction in false assumptions during the detection of breast cancer. The key aim of this research was to avoid false assumptions in the detection process in a cost-effective manner. We propose a unique method to decrease false assumptions in breast cancer detection and split it into three modules: preprocessing, formation of homogeneous blocks, and color quantization. The preprocessing involves eradicating extraneous slices; the homogeneous-block formation sub-method performs segmentation of the image; and the task of the third sub-method, color quantization, is to separate the colors among different regions.
Graph representations have vast applications and are used for knowledge extraction. As applications of graphs have increased, graphs have become more complex and larger in size, and visualizing and analyzing a large community graph is challenging. To study a large community graph, compression techniques may be used, provided no information or knowledge is lost during compression. This paper starts with a formal introduction and then represents graph models in compressed form using a greedy algorithm. The paper proceeds in the same direction and proposes a similar technique for compressing a large community graph that is suitable for carrying out graph mining steps. Observations show that the proposed technique reduces the iteration steps and may lead to better efficiency. An algorithm for the proposed technique is elaborated, followed by a suitable example.
In this paper we explain the detailed work done in developing a framework for discovering business intelligence from decision rules induced from customer reviews of a product or service posted online, illustrated with an analysis of the Samsung Galaxy S5. Our proposed framework collects reviews from the Samsung home page, preprocesses them, and induces rules using the rough-set-based LEM2 algorithm. The induced rules help business analysts understand the product dimensions and attributes and the inherent associations among them.
In this paper we explain the detailed work done in developing a system for opinion analysis of a product or service. The system readily processes tweets by pulling data from Twitter posts, preprocessing them, connecting to the Alchemy API via REST calls, and showing the result graphically. We give the analysis for the Samsung Galaxy: our proposed system accesses public tweets through the API, filters them for the Samsung Galaxy, and classifies the sentiment as positive, negative or neutral.
In a distributed system, a computer generally processes information for a distributed application or provides a service, so computers connected in a distributed system need to be kept on at all times. This leads to the concept of Wake-on-LAN (Local Area Network). However, keeping machines on during idle periods wastes power, so it is essential to save power while remaining efficient in network management. In this paper we propose an improved Wake-on-LAN device that incorporates basic Wake-on-LAN technology into a network management and power saving product.
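The basic Wake-on-LAN mechanism referred to above is well specified: a "magic packet" of 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch (the MAC address below is a placeholder):

```python
import socket

def make_magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed
    by the 6-byte target MAC address repeated 16 times (102 bytes)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    """Send the magic packet as a UDP broadcast; port 9 (discard)
    is conventional, port 7 (echo) is also common."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

pkt = make_magic_packet("aa:bb:cc:dd:ee:ff")
```

A network management product can then wake machines on demand and let them sleep during idle periods, which is exactly the power saving opportunity the paper targets.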
The fundamental concept of the crisp set has been extended in many directions in the recent past, the notion of the rough set by Pawlak being noteworthy among them. A rough set captures indiscernibility of elements in a set. From the viewpoint of granular computing, the classical rough set model is studied under a single granulation. It has been extended to the multi-granular rough set model, in which set approximations are defined using multiple equivalence relations on the universe simultaneously. But in many real-life scenarios, an information system establishes relations with different universes. This motivated the extension of multi-granulation rough sets on a single universal set to multi-granulation rough sets on two universal sets. In this paper, we establish algebraic properties and measures of uncertainty of the multi-granulation rough set for two universal sets U and V that are of interest in the theory of multi-granular rough sets. This helps in describing and solving real-life problems more accurately.
The notion of rough set captures indiscernibility of elements in a set. But in many real-life situations, an information system establishes relations between different universes. This motivated the extension of rough sets on a single universal set to rough sets on two universal sets. In this paper, we introduce an interesting topological characterization of rough sets on two universal sets, employing the notions of lower and upper approximation. We also study some basic set-theoretic operations on the types of rough sets formed by this topological characterization, and we provide a real-life example for in-depth illustration of the concept.
Mining association rules is a common data mining technique for finding knowledge about the universe [3]. Due to the lack of sufficient information about the universe, we cannot uniquely identify each of its elements. To characterize the elements of the universe and extract knowledge about it, we classify its elements based on an indiscernibility relation. The classification results in a set of classes called granules, which are the basic building blocks of knowledge about the universe [3, 4]. Granular computing processes these granules and produces possible patterns and associations. In this paper we study the concepts of indiscernibility and knowledge granulation, and we study association rules using both the conventional method and the granular computing method. We process the granules produced as the result of classification, produce association rules using granular computing, and finally compare both methods in terms of finding association rules.
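The conventional support/confidence computation against which the granular method is compared can be sketched for single-item rules (the transactions below are hypothetical; the granular-computing variant is not reproduced):

```python
from itertools import combinations

def association_rules(transactions, min_support, min_conf):
    """Enumerate rules A -> B with one-item antecedent and consequent,
    using plain support/confidence counting over a list of
    transaction sets."""
    n = len(transactions)
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n
    items = set().union(*transactions)
    rules = []
    for a, b in combinations(sorted(items), 2):
        for ante, cons in (({a}, {b}), ({b}, {a})):
            supp = support(ante | cons)
            if supp >= min_support:
                conf = supp / support(ante)
                if conf >= min_conf:
                    rules.append((ante, cons, supp, conf))
    return rules

tx = [{"milk", "bread"}, {"milk", "bread", "butter"}, {"bread"}, {"milk"}]
rules = association_rules(tx, min_support=0.5, min_conf=0.6)
```

In the granular approach, the same counting is carried out over granules (indiscernibility classes) rather than raw transactions, which can reduce the number of passes over the data.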
Deep learning for data analysis is a prime area of research. This book presents recent theoretical advances in the field of deep learning together with their applications to real-life problems, offering the concepts and techniques of deep learning in a precise and clear manner.
Smart and intelligent systems are a prime area of research. This book presents research in the field of smart and intelligent systems, compiling original works presented at SIS 2021, held in Andhra Pradesh, India. It serves as a reference for researchers and practitioners in academia and industry.
In recent years, bio-inspired computational theories and tools have been developed to assist people in extracting knowledge from high-dimensional data. This book covers interesting and challenging new theories in image and video processing. It addresses the growing demand for image and video processing in diverse application areas, such as secured biomedical imaging, biometrics, remote sensing, texture understanding, pattern recognition, and content-based image retrieval.
The internet has restructured global interconnections and touches an unbelievable number of aspects of personal life, leading to the Internet of Things. In this book, strong emphasis is placed on understanding the technological advancements and their applications. Numerous applications are used throughout the book to explain the technology and its usage in real life.
The growing presence of biologically-inspired processing has caused significant changes in data retrieval. With the ubiquity of these technologies, more effective and streamlined data processing techniques are available. Bio-Inspired Computing for Information Retrieval Applications is a key resource on the latest advances and research regarding current techniques that have evolved from biologically-inspired processes and their application to a variety of problems. Highlighting multidisciplinary studies on data processing, swarm-based clustering, and evolutionary computation, this publication is an ideal reference source for researchers, academics, professionals, students, and practitioners.
Image and video processing is an active area of research due to its potential applications for solving real-world problems. Integrating computational intelligence to analyze and interpret information from image and video technologies is an essential step in processing and applying multimedia data. Emerging Technologies in Intelligent Applications for Image and Video Processing presents the most current research relating to multimedia technologies, including video and image restoration and enhancement, as well as algorithms used for image and video compression, indexing and retrieval processes, and security concerns. Featuring insight from researchers from around the world, this publication is designed for use by engineers, IT specialists, researchers, and graduate-level students.
The work presented in this book combines theoretical advancements in big data analysis and cloud computing with their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and applications to real-life problems, mostly drawn from real-life situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques, along with some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material includes concepts, figures, graphs, and tables to guide researchers in the area of big data analysis and cloud computing.
Technological advancements have extracted a vast amount of useful knowledge and information for applications and services. These developments have evoked intelligent solutions that have been utilized in efforts to secure this data and avoid potential complex problems. Advances in Secure Computing, Internet Services, and Applications presents current research on the applications of computational intelligence in order to focus on the challenge humans face when securing knowledge and data. This book is a vital reference source for researchers, lecturers, professors, students, and developers who have an interest in secure computing and recent advances in real-life applications.
As the amount of accumulated data across a variety of fields becomes harder to maintain, it is essential for a new generation of computational theories and tools to assist humans in extracting knowledge from this rapidly growing digital data. Global Trends in Intelligent Computing Research and Development brings together recent advances and in-depth knowledge in the fields of knowledge representation and computational intelligence. Highlighting theoretical advances and their applications to real-life problems, this book is an essential tool for researchers, lecturers, professors, students, and developers who seek insight into knowledge representation and real-life applications.
Knowledge representation and granular computing are active areas of current research for their potential applications to many real-life problems. It is challenging for human beings to convert huge volumes of data into knowledge and to use this knowledge to make informed decisions. Extracting expert knowledge from the universe is very difficult and is an active area of research in artificial intelligence, involving the analysis of how to accurately and effectively use a set of symbols to represent a set of facts within a knowledge domain. The focus of this book is a combination of theoretical advancements of some of the extended models and their applications in knowledge bases. The theoretical advancements are supported with formal proofs to establish soundness, whereas the applications are mostly drawn from real-life situations. The book discusses some aspects of the rough set approach to knowledge discovery in databases and granular computing. An elaborate bibliography is provided at the end of the book. The material includes concepts, figures, graphs, and tables to guide researchers in the area of knowledge representation.
Theory of computation is the scientific discipline concerned with the general properties of computation; it studies the inherent possibilities and limitations of efficient computation that make machines more intelligent and enable them to carry out intellectual processes. This book develops the standard mathematical models of computational devices and investigates the cognitive and generative capabilities of such machines. It emphasizes mathematical reasoning and the problem-solving techniques that permeate computer science. Each chapter gives clear definitions and thoroughly discusses the concepts, principles, and theorems with illustrative and descriptive material.
This book is a required part of pursuing a computer science degree at most universities. It provides in-depth knowledge of the subject for beginners and stimulates further interest in the topic. It includes strong coverage of key topics, including recurrence relations, combinatorics, Boolean algebra, graph theory, and fuzzy set theory. Algorithms and examples are integrated throughout the book to bring clarity to the fundamental concepts, and each concept and definition is followed by thoughtful examples. The presentation is user-friendly and accessible, making learning as interesting as possible without compromising mathematical rigour. Additionally, around 300 complete solved illustrations explain the concepts, and over 300 end-of-chapter exercises stimulate further interest in the subject.
The book presents the fundamental concepts and methods of Computer Based on Mathematics in a precise and clear manner. It provides students of computer science, information technology, and management with in-depth knowledge in these fields and is designed to stimulate further interest in the topic. To this end, the development of mathematical concepts is emphasized, and progressively more complex material and applications are presented.
Teaching is an art that undertakes certain ethical tasks and activities with the intention of fostering learning. Though teaching has no exact definition, its mission and practice can be described in numerous ways. As a teacher, my primary mission is to make my workplace a center of excellence in R&D on par with international standards and to become a role model for others. In my practice, the primary principle is to treat research as an integral component of teaching and to give equal importance to both teaching and research in day-to-day life.
This course provides the required theoretical foundation for computational models. Additionally, the Turing machine is discussed as an abstract computational model.
Data structures and algorithms are fundamental to any scientific computation. Designing algorithms to achieve better performance is the prime objective of this subject.
Gave a systematic introduction to the fundamentals and practices of computational intelligence, which encompasses artificial neural networks, fuzzy logic systems, evolutionary computing, swarm intelligence, neuro-fuzzy and fuzzy neural systems, hybrid intelligent systems, and applications to design, manufacturing, and business.
Algorithms are fundamental to any scientific computation. With the availability of many processors, designing algorithms to achieve better performance is the prime objective of this subject.
Algorithm design and analysis is a fundamental and important part of computer science. This course introduces students to advanced techniques for the design and analysis of algorithms, and explores a variety of applications.
Introduction to fundamental techniques for designing and analyzing algorithms, including asymptotic analysis; divide-and-conquer algorithms and recurrences; greedy algorithms; data structures; dynamic programming; graph algorithms; and randomized algorithms.
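Two of the techniques listed above, recurrences and dynamic programming, can be illustrated with a minimal memoized example; the choice of Fibonacci numbers is illustrative only:

```python
from functools import lru_cache

# Naive recursion on the recurrence F(n) = F(n-1) + F(n-2) takes
# exponential time; memoization (top-down dynamic programming)
# reduces it to linear time by caching each subproblem once.

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```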
Computer simulations reproduce the behavior of a system using a mathematical model. They have become a useful tool for the mathematical modeling of many natural systems in physics, astrophysics, climatology, chemistry, and biology; human systems in economics, psychology, and social science; and engineering.
Using techniques such as mathematical modeling to analyze complex situations, resource management enables more effective decisions and more productive systems based on robust data, the fuller consideration of available options, and careful predictions of outcomes and estimates of risk.
A photograph speaks more than words. Photographs and digital images play a vital role when we think about our past memories and achievements. The ceremonies of professional awards, important events such as service awards, togetherness, the workplace, invited talks, PhD awards of doctoral students, and newspaper cuttings are all recorded because they matter. Photographs tell our personal stories: our lives, faces, and the places that we love. We can share these stories with others, and they indeed bring us happiness. That impulse is presented here and is a powerful way for me to represent myself.
A contact address is of prime importance for reaching a person; it provides the means by which one can reach me. Some addresses include a web address, voice call, voice chat, video chat, and official address to make identification easier. Contact details online help people to get in touch easily. I would be happy to talk to you if you need my assistance in your research or business administration support for your company. I devote my limited time to the development of my students.
You can find me at my workplace, located at Silver Jubilee Tower Annexe, 201 E, VIT University, Vellore, Tamilnadu, India.
I am at my office every day from 9:00 AM until 6:30 PM except holidays, but you may consider a call to fix an appointment.