Theses

  1. CLARIMAR JOSE COELHO, 25/02/1999
  2. NILTON CORREIA DA SILVA, 09/07/1999
  3. VÂNIA CRISTINA DE ABREU, 13/09/1999
  4. ANDRÉ MACHADO CARICATTI, 14/04/2000
  5. ISABELA N. F. DE QUEIROZ, 22/03/2001
  6. CARLOS ROBERTO PORFÍRIO JUNIOR, 21/06/2001
  7. CANDIDO GUERRERO SALGADO, 10/01/2002
  8. WESLEY MARTINS TELES, 18/07/2003
  9. OSMAR QUIRINO DA SILVA, 30/09/2003
  10. SORAIA SILVA PRIETCH, 13/02/2004
  11. MARCOS VINICIUS PINHEIRO DIB, 03/09/2004
  12. MANQI WU, 15/04/2005
  13. DANIELA PEREIRA ALVES, 09/11/2006
  14. BUENO BOGES SOUZA, 13/06/2008
  15. SANDRO CARLOS VIEIRA, 21/07/2008
  16. ANTONIO CARLOS DE ARRUDA JUNIOR, 08/12/2009
  17. CICERO ROBERTO FERREIRA DE ALMEIDA, 26/08/2010
  18. ANTONIO CRESPO, 26/10/2010
  19. CHARLES HENRIQUE GONÇALVES SANTOS, 15/08/2011
  20. ZHENG JIANYA, 09/03/2012
  21. LEONARDO LUIZ BARBOSA VIEIRA CRUCIOL, 08/03/2012

1. CLARIMAR JOSE COELHO, 25/02/1999

Image Identification and Analysis with Multiresolution Wavelets in Case-Based Reasoning Systems

Representing a high-quality image requires a large number of bits; a photographic-quality image, for example, may require up to 100 million bits. Storing, processing, analyzing, and transmitting large volumes of images therefore demands high compression rates. In general, image processing is not performed on images in their raw state: it is necessary to reduce the large number of bits used in their representation. One of the most successful mathematical tools for this task is the wavelet transform. Case-Based Reasoning (CBR) systems offer great advantages and facilities in various application domains, such as planning, design, and diagnosis. We present a hybrid Case-Based Reasoning system that combines the advantages of CBR with the power of wavelet transforms for image processing, in an application to rainfall estimation. The wavelet transforms are used to eliminate noise, reduce the number of bits, and retain the most important physical information in the images.
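The role the abstract assigns to wavelets (denoising and bit reduction) can be illustrated with a minimal sketch. The thesis does not specify the wavelet family or thresholding rule; the code below assumes a single Haar decomposition level and hard thresholding, using plain NumPy.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (rows, then columns)."""
    # Split rows into averages (low-pass) and differences (high-pass)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Repeat along columns to obtain the four sub-bands
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def denoise_threshold(band, t):
    """Hard-threshold small detail coefficients (assumed to be noise)."""
    return np.where(np.abs(band) < t, 0.0, band)

# Example: decompose a random "image" and suppress small detail coefficients
img = np.random.default_rng(0).normal(size=(8, 8))
ll, lh, hl, hh = haar2d(img)
lh, hl, hh = (denoise_threshold(b, 0.5) for b in (lh, hl, hh))
```

Keeping only the approximation band and the thresholded details is what reduces the bit count while retaining the dominant structure of the image.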

 

2. NILTON CORREIA DA SILVA, 09/07/1999

Implementation of Parallel Self-Organizing Map Networks for Mapping Weather Radar Images

This work is notable for its use of neural networks and parallel processing in the classification of meteorological images (an important step in precipitation forecasting). The algorithm presented here exploits the benefits of a parallel processing environment to assist in the training, testing, and use of unsupervised neural networks (Self-Organizing Maps). A feature of this learning algorithm is that it produces a single trained network in one pass, an approach called "Once Learning" in this work.

With this type of training, the training, testing, and use of neural networks can be effectively parallelized, which means great savings in processing time. This effect is of great importance in the field of neural networks, since processing large amounts of data is a routine task in the training, testing, and use subprocesses.

This work also contributes directly to the processing of meteorological images, because with this new algorithm, neural network classifiers (used for detecting rainfall in radar and satellite images) can analyze a much larger number of images per unit of time in a parallel environment.

Through this research, a prototype was developed (in Matlab 5.0) that simulates, in a serial environment, the Parallel Self-Organizing Map algorithm for mapping meteorological images.
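For readers unfamiliar with Self-Organizing Maps, a serial SOM training loop can be sketched as follows. The grid size, decay schedules, and Gaussian neighborhood are illustrative choices, not those of the thesis, and NumPy stands in for the original Matlab implementation.

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: each grid node holds a weight vector
    pulled toward the winning input, with a neighborhood fall-off."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.normal(size=(grid_h, grid_w, dim))
    # Grid coordinates, used by the neighborhood function
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5   # decaying radius
        for x in data:
            # Best-matching unit: node whose weights are closest to x
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood centered on the winner
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[:, :, None] * (x - weights)
    return weights
```

The inner loop over inputs is independent per input distance computation, which is the part a parallel environment can accelerate.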

 

3. VÂNIA CRISTINA DE ABREU, 13/09/1999

Developing a Trip Distribution Methodology with the Application of Fuzzy Logic

4. ANDRÉ MACHADO CARICATTI, 14/04/2000

Speaker Recognition in Portuguese with Neural Network and Gaussian Models

This thesis studies the problem of speaker recognition, or more specifically speaker verification. Viewed as a subset of signal processing, and therefore divided into the steps of signal acquisition, parameter extraction, and classification, the work uses parameters formed by mel-frequency cepstral coefficients and their rates of change, called delta mel-frequency cepstral coefficients. Considering the objectives of the study, a database was formed from 14 people providing utterances in Portuguese, recorded indoors under imperfect conditions, with noise and echo. Altogether, three systems were implemented: one text-dependent and two text-independent. In the first, multilayer neural networks with back-propagation learning were used, with correlations between mel-frequency cepstral coefficients composing the input values. For text-independent speaker recognition, self-organizing maps (SOM) were applied, together with the relative entropy between the models of reference speakers and unknown speakers. Tests were conducted to determine the optimal number of neurons in the output layer, arriving at 225, and the length of the phrases used to train the models, around 48 seconds being needed to reach correct recognition in 50% of attempts. Finally, Gaussian mixture models (GMM) were assessed and compared with the SOM implementation; estimates of the phrase length needed to form speaker models were also obtained, samples of approximately 20 seconds being sufficient with 32 mixtures per speaker model, reaching successful recognition in 64% of the experiments.
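The GMM scoring stage can be sketched as follows: each speaker is represented by a diagonal-covariance mixture, and an utterance is attributed to the speaker whose model gives the highest average log-likelihood over its feature frames. Training the mixtures (and the MFCC extraction itself) is omitted; the parameters here are assumed given.

```python
import numpy as np

def gmm_loglik(frames, means, variances, weights):
    """Average log-likelihood of feature frames under a diagonal GMM.
    frames: (T, D); means, variances: (K, D); weights: (K,)."""
    T, D = frames.shape
    # Per-component Gaussian log-density, for all frames at once
    diff = frames[:, None, :] - means[None, :, :]              # (T, K, D)
    log_det = np.sum(np.log(variances), axis=1)                # (K,)
    quad = np.sum(diff ** 2 / variances[None, :, :], axis=2)   # (T, K)
    log_comp = -0.5 * (D * np.log(2 * np.pi) + log_det + quad)
    # Log-sum-exp over components, weighted by mixture priors
    log_mix = log_comp + np.log(weights)
    m = log_mix.max(axis=1, keepdims=True)
    ll = m[:, 0] + np.log(np.exp(log_mix - m).sum(axis=1))
    return ll.mean()

def identify(frames, speaker_models):
    """Pick the speaker whose GMM best explains the frames."""
    return max(speaker_models, key=lambda s: gmm_loglik(frames, *speaker_models[s]))
```

In the thesis's setting, `frames` would be delta mel-frequency cepstral coefficient vectors and each speaker model would hold 32 mixtures.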

 


5. ISABELA N. F. DE QUEIROZ, 22/03/2001

Estimation of Saturation Flow with the Aid of Neural Networks

6. CARLOS ROBERTO PORFÍRIO JUNIOR, 21/06/2001

An Approach to Quantum Computing within Computational Theory, and the Simulation of Quantum Resources on Conventional Computers

Classical computational resources face problems of high complexity that sometimes cannot be solved in acceptable time. Quantum computing, a very recent research area of computer science, appears as a viable alternative, probably the only one, for problems whose classical algorithms are theoretically feasible but practically intractable. This work summarizes the basic concepts of quantum computing, framing it within computational theory; presents the main algorithms already studied in the area; and, above all, demonstrates a simulation of quantum computational resources on a conventional computer. The goal is to provide a didactic academic means of beginning studies in this area, since quantum phenomena are difficult for newcomers to quantum computing to accept and interpret, because these phenomena do not produce the behaviors expected in the conventional setting.
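A conventional simulation of quantum resources, of the kind the thesis describes, can be sketched by storing the full state vector of n qubits (2^n complex amplitudes) and applying gates as matrices. The qubit ordering (qubit 0 as the leftmost tensor factor) is an illustrative convention.

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a single-qubit gate to `target` in an n-qubit state vector,
    by building the full 2^n x 2^n operator with Kronecker products."""
    full = np.array([[1]], dtype=complex)
    for q in range(n):
        full = np.kron(full, gate if q == target else np.eye(2))
    return full @ state

# Hadamard gate: puts a basis state into an equal superposition
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = np.zeros(4, dtype=complex); state[0] = 1.0  # |00>
state = apply_gate(state, H, 0, 2)                  # superpose qubit 0
probs = np.abs(state) ** 2                          # measurement probabilities
```

The exponential size of the state vector is exactly why such simulations are didactic tools rather than substitutes for a quantum computer.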

 

7. CANDIDO GUERRERO SALGADO, 10/01/2002

The Behavior of Association Rules and Their Application to the Analysis of Medical Data

Data mining, also known as KDD (Knowledge Discovery in Databases), is the process of extracting interesting, non-trivial, implicit, previously unknown, and potentially useful information from large volumes of data. Association rule mining seeks interesting associations between items in a large volume of data. This work presents, in an introductory way, some interesting approaches to data mining; a comparative analysis of these approaches is performed and its results are presented. The state of the art in association rules is then surveyed, tracing their evolution to the present stage. Finally, we present a case study applying association rules to aid medical diagnosis.
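The support and confidence measures on which association rule mining rests can be shown with a deliberately naive miner over single-item antecedents and consequents. Real systems use Apriori or FP-Growth to prune the search; this sketch only illustrates the measures themselves.

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Naive rule miner: support(X -> Y) = freq(X and Y) / N,
    confidence(X -> Y) = support(X and Y) / support(X)."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    def support(itemset):
        return sum(itemset <= t for t in transactions) / n
    rules = []
    for a, b in combinations(sorted(items), 2):
        for lhs, rhs in (({a}, {b}), ({b}, {a})):
            sup = support(lhs | rhs)
            if sup >= min_support and support(lhs) > 0:
                conf = sup / support(lhs)
                if conf >= min_confidence:
                    rules.append((lhs, rhs, sup, conf))
    return rules
```

In a medical setting, a transaction would be a patient record and a rule such as {symptom} -> {diagnosis} would be reported with its support and confidence.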

 

8. WESLEY MARTINS TELES, 18/07/2003

An Adaptive System for Websites Based on Ant Behavior

Ant behavior can be used as a metaphor to improve the performance of Web systems. With current Internet browsing technology, the conditions under which a user navigates pages looking for a target are very similar to the way ants search for food. Unlike ants, however, web users have no pheromone with which to cooperate in the navigation process. Based on these observations, and on the absence of prior work aimed at guiding web users with an emphasis on route optimization, this study was carried out to fill that gap by applying ant behavior to the Web, an approach we call AntWeb. The study is divided into two parts: AntWeb for website evaluation and adaptive AntWeb. For website evaluation, an evaluation methodology was developed and implemented in software that takes the structure of a website as input and outputs a performance measure of that structure. For adaptive AntWeb, two heuristic models were developed to guide the user in an adaptive-hypermedia Web context, which we call model 1 and model 2. Model 1 was implemented and tested on the website of the Department of Computer Science, UnB. With model 2, simulations were made which showed, optimistically, that the model can assist Web users. The aim of this study is to report the research done with AntWeb.
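A toy version of the pheromone metaphor makes the idea concrete: successful user paths deposit "pheromone" on links, trails evaporate over time, and the system recommends the strongest outgoing link. The class and update rule below are illustrative, not the thesis's models 1 and 2.

```python
class AntWebSketch:
    """Toy pheromone model for a website: followed links gain pheromone,
    trails slowly evaporate, and the strongest trail is recommended."""

    def __init__(self, links, evaporation=0.1):
        self.links = links  # page -> list of neighbor pages
        self.evaporation = evaporation
        self.pheromone = {(a, b): 1.0 for a in links for b in links[a]}

    def record_path(self, path):
        """A user (ant) who reached a target reinforces the trail taken."""
        for a, b in zip(path, path[1:]):
            self.pheromone[(a, b)] += 1.0

    def evaporate(self):
        """Old trails fade, so recommendations track current interests."""
        for k in self.pheromone:
            self.pheromone[k] *= (1.0 - self.evaporation)

    def recommend(self, page):
        """Suggest the outgoing link with the strongest pheromone."""
        return max(self.links[page], key=lambda b: self.pheromone[(page, b)])
```

Evaporation is what keeps the system adaptive: a link that stops being used gradually loses its recommendation strength.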

 

9. OSMAR QUIRINO DA SILVA, 30/09/2003

Modeling a Genetic Algorithm for the Optimization, Prediction, and Guidance of Road Traffic

This study evaluates the applicability of the Genetic Algorithm model, an Artificial Intelligence (AI) technique, to problems of prediction, optimization, and guidance of road traffic. It implements a solution to the problem of choosing the best arterial route in the Federal District, over a data set that reflects the situation on these routes in real time, based on origin/destination flow. As a consequence, it leads to a dynamic adjustment of traffic flow, bringing the system to equilibrium. The case study presents three simulations considering actual data such as distance, free-stream velocity, and obstacles. The experiments demonstrate the efficiency of the classification, generating simulation results for the road network linking urban centers. Because the method is a heuristic, the stopping criterion of the algorithm was set at 5 generations, the best result among the five being selected, to prevent the algorithm from presenting a solution other than the minimization of the cost function. The solution applies to any city or similar problem. This study shows improvements in shortest-path solutions and in dynamic traffic balancing in real time, avoiding points of excessive congestion. Keywords: Traffic Optimization, Genetic Algorithm, Traffic Forecasting, Traffic Guidance.

 

10. SORAIA SILVA PRIETCH, 13/02/2004

Computational Methods for Forecasting Bus Travel Times between Consecutive Urban Stops

This work investigates the problem of predicting the travel time of urban mass-transit buses between consecutive stops, carrying out a comparative study of two well-known methodologies: Artificial Neural Networks (ANN) and the Box-Jenkins model. Several steps led to the main focus of the work, among them: a study of the state of the art, a literature search that supported the research; field data collection and observation of the chosen bus line in the Plano Piloto, Brasília, Distrito Federal (a single line was chosen to delimit the object of study); and the modeling of the data to the needs of the software used in the implementation phase, MATLAB for the neural networks and the SAS software for the Box-Jenkins model. Finally, the best model was chosen to formalize a solution for estimating the arrival times of urban public transport vehicles at bus stops. The results were satisfactory, and it was found that Artificial Neural Networks are suitable for solving this prediction problem.

 
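The neural-network side of the ANN-versus-Box-Jenkins travel-time comparison described above can be sketched with a tiny back-propagation regressor on synthetic trip features. The architecture and data are illustrative; the thesis used MATLAB.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=1000, seed=0):
    """Tiny one-hidden-layer network trained by full-batch back-propagation,
    regressing (for example) travel time from trip features."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y[:, None]             # mean-squared-error residual
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2)[:, 0]
```

A Box-Jenkins (ARIMA) model would instead fit the travel-time series directly; the comparison in the thesis is between these two modeling styles.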

11. MARCOS VINICIUS PINHEIRO DIB, 03/09/2004

A Multi-Agent System Using a Computational Grid for the Synchronization and Management of Air Traffic Flow

This research proposes a multi-agent system using grid services for the synchronization and management of air traffic flow (ATFM). The ATFM service is designed to ensure an optimal flow of air traffic to, or through, areas in which traffic demand at times exceeds the available capacity. The work presents the ATFM problem and its synchronization property. To demonstrate the developed system, the grid architecture, the agents, and the negotiation mechanisms between them are described. As an example, a case study of tactical planning at 4 Brazilian airports is reported. As a criterion for measuring effectiveness in reducing both the number of negotiations between agents and flight delays, the analysis uses the Airport Balancing Standard (PBA). The simulation shows the efficiency of the developed model and the success of its application in this case study.

 

12. MANQI WU, 15/04/2005

A Cooperative Navigation System for Websites: Approach and a Case Study of the Interlegis Portal

The AntWeb system is a navigation aid for websites whose approach is inspired by the behavior of ant colonies searching for food; it treats Web users as if they were artificial ants. This work presents an extension to AntWeb that takes into account the complex behavior of human beings during the search process, using Web usage mining techniques to capture visitors' navigation of the website, making AntWeb more appropriate and applicable to the reality of Web users. The proposed extended model of AntWeb preprocesses Web server log files, discovers categories of users that exhibit similar information needs, and dynamically adapts pages with highlighted links that let visitors reach their common interests. A prototype of this model was implemented using real data obtained from the Interlegis Portal - the website of the Brazilian legislative community - with free software: Zope (Z Object Publishing Environment), CMF (Content Management Framework), and the PostgreSQL relational database. To demonstrate the feasibility and potential of the proposed solution, three case studies were conducted: the first validates the applicability of Web usage mining in AntWeb; the second validates the extended model of AntWeb as an effective cooperative navigational aid for websites; and the third evaluates system performance with and without AntWeb. In the AntWeb case, tests varied the parameters that directly or indirectly affect the probability calculation. The results show the sensitivity and accessibility of the system, as well as benefits to users of the Interlegis website.

 


13. DANIELA PEREIRA ALVES, 09/11/2006

Modeling Reinforcement Learning and Meta-Level Control to Improve Communication Performance in Air Traffic Management

This work describes an intelligent computational model that assists decision making in the exchange of messages in a distributed system. The model acts as an additional layer that uses metadata in its decisions. It can be considered innovative because it uses reinforcement learning suited to the characteristics of a stochastic environment, concerned with both the speed and the quality of decision making. We propose three learning strategies: an initial heuristic, an adaptive epsilon, and a performance-based heuristic. These are combined with the reinforcement learning algorithms Q-learning and SARSA. Case studies evaluate the performance and the quality of learning under the proposed strategies.
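A minimal tabular Q-learning loop with epsilon-greedy exploration shows the update rule shared by the algorithms named above (SARSA differs only in bootstrapping on the action actually taken rather than the greedy one). The fixed epsilon here stands in for the thesis's adaptive strategies.

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=200,
               alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning sketch.
    env_step(state, action) -> (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:                     # explore
                a = rng.randrange(n_actions)
            else:                                          # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = env_step(s, a)
            # Q-learning update: bootstrap on the best next action
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

In the thesis's setting, states would describe the messaging environment and the reward would reflect the speed and quality of the resulting decision.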

 

14. BUENO BOGES SOUZA, 13/06/2008

An Intelligent Multi-Flow Balancing Methodology for Use in Air Traffic Flow Management

The Flow Balancing Module (MBF) is proposed to support the system in operation at the First Integrated Center for Air Defense and Air Traffic Control (CINDACTA I) and to improve the management process applied by the controllers at this center. Using flow-maximization techniques adapted from graph theory, the MBF was developed as an analysis model that determines the separation time between departures from the terminals contained in the Brasília Flight Information Region (FIR-BS) and distributes takeoff flow along the controlled airspace. The goal is to prevent or reduce congestion in the various sectors of the FIR-BS.


With the help of the other system modules, the MBF supports controllers in regulating traffic flow, aiding decision making. Through this tool, controllers can acquire knowledge that helps them make better decisions. The research also presents the results of a simulation with two policies: equal flow distribution and prioritized flow distribution. As an example, it is shown that the separation times between departures may be reduced by 30% to 60%, depending on the policy.

 

15. SANDRO CARLOS VIEIRA, 21/07/2008

A Model Based on Artificial Intelligence for Knowledge Management Applied to the Software Development Process

Developing software is a common activity for many companies in various areas of modern society, even when it is not the focus of their business. This reality is evident in the fact that most products and services are supported by computing systems, especially in large companies and in processes that involve significant values. In the financial sector, for example, there are organizations that dedicate a few thousand professionals to developing systems that automate their activities and to maintaining others that have been in use for a long time.
In this context, the software development process has been the subject of constant concern, but most of the work is focused on resource control and on the improvement and standardization of current processes, largely neglecting the knowledge present in this cycle. This research focuses on this issue, proposing a model based on artificial intelligence formalisms that adds mechanisms for managing the knowledge involved in the software development process. The model described here was constructed and applied to a case study in a large financial institution, obtaining promising results regarding its use as a way to increase the reuse of solutions already developed, avoid duplicated effort in building similar solutions, and provide an effective alternative for tracing the features and details of such solutions.
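One common AI mechanism for reusing previously developed solutions is nearest-neighbor case retrieval: a new problem description is matched against stored cases and the best-matching stored solution is returned. The bag-of-words similarity below is an illustrative choice; the thesis does not specify its formalism.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(case_base, query):
    """Return the stored solution whose problem description best
    matches the new problem (nearest-neighbor case retrieval)."""
    qv = Counter(query.lower().split())
    best = max(case_base,
               key=lambda c: cosine(Counter(c["problem"].lower().split()), qv))
    return best["solution"]
```

In practice a threshold would be applied so that poor matches trigger building a new solution, which is then added to the case base.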

 

16. ANTONIO CARLOS DE ARRUDA JUNIOR, 08/12/2009

The solution was modeled as a Decision Support System (DSS), through the construction of the Impact Modeling and Projection Submodule (MPI), part of the Evaluation and Decision Support Module (BHAG), which belongs to the modular System for the Integration and Application of Management Measures for Air Traffic Flow Control (SISCONFLUX). The architecture of this module uses an autonomous agent that, in congestion situations, provides suggestions of restrictive measures to air traffic controllers, focused on the Ground Holding Problem (GHP). The agent acquires knowledge from the environment through the Q-learning algorithm, assessing both the suggested restrictive measures and the air-traffic scenarios resulting from them. The evaluation of the resulting scenarios is carried out through a function that combines data such as the congestion level, the delay imposed on the aircraft, an index of financial impact, and the equity of the distribution of the restrictive measures.


The results were obtained using data from a real environment, formed by the airspace configuration of the Brasília Flight Information Region (FIR-BS), managed by the First Integrated Center for Air Defense and Air Traffic Control (CINDACTA I). Analysis of these results indicates that factors such as equity and financial costs can be used in conjunction with congestion data, without compromising safety standards, with the agent learning to suggest actions to the human controller that take into account the impact generated by the actions taken.

 

17. CICERO ROBERTO FERREIRA DE ALMEIDA, 26/08/2010

This work proposes, as a solution to this problem, a model based on graph theory that employs multiple maximum-network-flow algorithms. The model was realized in a software prototype called the Flow Balancing Module (MBF). The entire Brazilian airspace is modeled as a flow network. The model performs traffic-flow balancing and aircraft distribution by means of the maximum-flow algorithms Edmonds-Karp, Dinic, FIFO Preflow-Push, and Highest-Label Preflow-Push.

The validation of the algorithm implementations is performed through tests using actual air-movement data to compute the balancing flow. The model uses two policies for distributing slack in controlled spaces: the first distributes the slack equally among the sectors of the airspace; the second distributes the slack so as to prioritize specific routes.

The tests employ two databases covering days of high and low air-traffic flow in the years 2008 and 2009. The data for forming the real scenario of Brazilian airspace were provided by the Air Navigation Management Center (CGNA). Analysis of the test results indicates that aircraft ground waiting time can be reduced in most cases without causing congestion in the airspace sectors. In addition, the use of multiple algorithms offered different options for distributing the traffic-flow load.
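Of the four algorithms named, Edmonds-Karp is the simplest to sketch: repeatedly find the shortest augmenting path by BFS in the residual graph and push the bottleneck flow along it. The small dictionary-based graph below is generic, not the MBF's actual airspace network.

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Edmonds-Karp maximum flow.
    capacity: dict {u: {v: c}} of directed edge capacities."""
    # Build the residual graph, adding reverse edges with capacity 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # Find the bottleneck capacity along the path
        v, bottleneck = sink, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Push the bottleneck flow and update residual capacities
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

In the MBF's setting, nodes would be airspace sectors and capacities their traffic limits; the preflow-push variants compute the same maximum flow with different performance characteristics.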

 

18. ANTONIO CRESPO, 26/10/2010

Within the Brazilian Airspace Control System (SISCEAB), the Air Traffic Flow Management (ATFM) activity is of crucial importance, especially considering two extremely important aspects: first, the impact of (tactical) ATFM activity on air traffic control, including its inherent security implications for operations; and second, the possible consequences of ATFM measures for airport logistics. It therefore becomes imperative to establish processes and/or develop systems (computational agents) that help traffic flow managers (human agents) take optimized actions. In this context, this work presents a comparative analysis of air traffic flow control measures generated by an intelligent computational agent based on reinforcement learning. The objective of this agent is to establish delays in the takeoff schedules of aircraft departing from a given TMA, so that the air traffic control sectors neither congest nor saturate due to an eventual momentary imbalance between demand and capacity. The work includes a case study comparing the measures generated autonomously by the agent with measures that take into account the experience of the air traffic flow managers at the Air Navigation Management Center (CGNA).

 

19. CHARLES HENRIQUE GONÇALVES SANTOS, 15/08/2011

A Proposal for Ontological Modeling of the Mercosur Common Nomenclature (NCM). 15/08/2011. Dissertation (Master's in Mechatronic Systems) - Universidade de Brasília. Advisor: Li Weigang.

 

The Mercosur Common Nomenclature (NCM) is a taxonomy used by Brazil for the classification of products, a classification considered of great importance in the import and export of goods, as well as in domestic market operations. However, the NCM as it stands does not make goods classification a simple practice, because the nomenclature currently used in Mercosur contains outdated product names, which end up producing ambiguous meanings. One way to solve the problem is to use the concept of the Semantic Web, which enables automatic data collection over the Internet, together with an ontology, an instrument capable of giving new meanings to the terms in the NCM. The ontology proposed in this work aims to provide new domains of interest for NCM products, including synonyms, additional languages, trade restrictions on products (import and/or export), and the possibility of correlating existing laws with the respective products. The case study carried out demonstrated the efficiency of the ontology, since it provided the expected answers to all previously pending questions. Thus, comparing the taxonomy currently used by Mercosur for classifying its products with the proposal of this work, it can be stated that the use of OntoNCM improved the usability of the Mercosur Common Nomenclature.
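The synonym and restriction domains the ontology adds can be illustrated with a miniature concept map. The codes, products, and restrictions below are purely illustrative, not taken from OntoNCM or the real NCM tables.

```python
# Hypothetical miniature of an OntoNCM-style ontology: each code maps to a
# concept carrying a label, synonyms (in more than one language), and trade
# restrictions; lookup resolves any synonym to its code.
ontology = {
    "0000.01": {"label": "roasted coffee",
                "synonyms": {"roasted coffee", "cafe torrado"},
                "restrictions": {"export": True, "import": True}},
    "0000.02": {"label": "portable computer",
                "synonyms": {"portable computer", "laptop", "notebook"},
                "restrictions": {"export": True, "import": False}},
}

def classify(term):
    """Resolve a product name (or any of its synonyms) to its code."""
    t = term.lower()
    for code, concept in ontology.items():
        if t in concept["synonyms"]:
            return code
    return None
```

The point of the ontological model is exactly this indirection: an outdated or foreign-language product name still resolves to the right classification, instead of producing an ambiguous match.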

 


20. ZHENG JIANYA, 09/03/2012

PageRank and W-Entropy to Analyze the Influence of Social Network Members. 09/03/2012. Dissertation (Master's in Informatics) - Universidade de Brasília, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. Advisor: Li Weigang.

With the fast development of social networks, a suitable method is needed to determine the influence of people or brands in the communication of the modern e-society. Every social network has its own ranking to display who is most popular in that virtual society. However, such measurements are still incomplete and one-dimensional, for example lists ranked by number of visits. In this research a new measurement model, W-entropy, is developed using information theory to study the influence of individuals across multiple social networks. Real data were collected from Facebook, Twitter, YouTube, and Google. To compare the effectiveness of the developed method, the Famecount ranking is studied on the same data. The results show that the W-entropy method is suitable for reflecting the uneven distribution of information across multiple platforms. The method is implemented on the Web (wentropia.com) to automatically present a ranking list of the related social network stars.
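The abstract does not give the W-entropy formula; as an assumption, the sketch below uses plain Shannon entropy over an individual's normalized per-platform influence shares, which is one information-theoretic way to quantify how evenly influence is spread across platforms.

```python
import math

def platform_entropy(platform_scores):
    """Shannon entropy (in bits) of the normalized per-platform shares
    of an individual's influence. High entropy = evenly spread influence;
    zero = influence concentrated on a single platform."""
    total = sum(platform_scores.values())
    if total == 0:
        return 0.0
    entropy = 0.0
    for s in platform_scores.values():
        if s > 0:
            p = s / total
            entropy -= p * math.log2(p)
    return entropy
```

A ranking could then combine per-platform scores with this entropy term, so that a figure influential on all four platforms ranks above one with the same total concentrated on a single platform.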

 

21. LEONARDO LUIZ BARBOSA VIEIRA CRUCIOL, 08/03/2012

Decision Support Modeling for the Air Holding Problem Using Multi-Agent Systems and Reinforcement Learning. 08/03/2012. Dissertation (Master's in Informatics) - Universidade de Brasília. Advisor: Li Weigang.

The Air Holding Problem Module (AHPM) is proposed as a Decision Support System to assist air traffic controllers in their management. The system is developed on a multi-agent platform to organize and improve the solutions with which controllers handle traffic flow in Brazilian airspace. The work focuses on problem solving with Reinforcement Learning, showing how the controllers' historical experience can be used to assist in new situations. About 1,110 daily flights were studied under the responsibility of two Flight Information Regions (FIR), FIR-Brasília and FIR-Recife, considering local and global evaluation functions and three time intervals for applying restrictive measures. As a result, with the use of the AHPM, the decision process improved with a high level of automation. At the same time, congestion in the sectors involved was reduced by 15% to 57% in local scenarios and by 41% to 48% in the global scenario.