If anyone cares to see the research done by the team, we gladly present it here:
Emociones artificiales usando mecánica cuántica (Artificial Emotions Using Quantum Mechanics)
Abstract
At present, robots and intelligent systems still lack the capacity to act truly like humans, because we not only use reasoning to make decisions but also depend on our emotions. For this reason, it has become important to develop systems that can reason but can also display emotions. This article presents the development of a system of artificial emotions and personalities built on analogies with a quantum mechanics (QM) system. It first describes the motivation for developing such a system, followed by the theoretical background needed to understand the approach taken. It then presents the development of the solution and the experiments carried out on the system to test its functionality.
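As a rough illustration of the quantum-mechanics analogy, an emotional state can be sketched as a superposition of basis emotions whose squared amplitudes give the probability of each emotion being expressed. The emotion names, the normalization, and the "measurement" mechanism below are all illustrative assumptions; the paper's actual model may differ.

```python
import random

# Hypothetical basis emotions; the actual set used in the paper is not
# specified in this abstract.
EMOTIONS = ["joy", "sadness", "anger", "fear"]

def normalize(amplitudes):
    """Scale amplitudes so the squared magnitudes sum to 1 (unit state vector)."""
    norm = sum(a * a for a in amplitudes) ** 0.5
    return [a / norm for a in amplitudes]

def measure(amplitudes, rng=random):
    """'Collapse' the state: pick one emotion with probability |amplitude|^2."""
    probs = [a * a for a in amplitudes]
    r = rng.random()
    cumulative = 0.0
    for emotion, p in zip(EMOTIONS, probs):
        cumulative += p
        if r < cumulative:
            return emotion
    return EMOTIONS[-1]

# Personality could bias the amplitudes: this agent leans toward joy.
state = normalize([0.8, 0.3, 0.4, 0.1])
print(measure(state))
```

The appeal of the analogy is that a single state vector encodes a blend of emotions at once, and repeated "measurements" yield varied but personality-biased behavior.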

Use of Graphs with Bayesian Networks and Q-Learning for Pass-Into-Space Decision Making in RoboCup Soccer Simulation 2D
Abstract
This thesis analyzes the problem of decision making in a soccer environment, specifically that of the RoboCup 2D simulation. When an agent has the ball, it must decide whether to pass the ball, run with it, shoot, or wait for a better opportunity. Furthermore, a pass can be made directly to an agent, or a pass into space can be used: a pass into an empty area of the field that a teammate will run to in order to receive the ball. This problem is of special interest because it requires cooperation between the agents to build plays that lead to a goal.
To solve this problem, the implementation of a Bayesian network is proposed to establish the weights of a graph used for decision making in the RoboCup 2D environment. Once the Bayesian network has produced the weights, the minimum spanning tree is computed, and a different network recalculates the weights so that decisions that are suboptimal for the game are avoided. In addition, Q-learning lets the agents update their networks in order to learn to play against the opponent they are facing. For this work, offline learning was implemented.
To test the proposed method, games were played between teams that shared the same basic structure but differed in their decision-making algorithm. The method proposed in this thesis is first evaluated without reinforcement learning against a team that uses a plain graph for decision making. Games are then played with Q-learning enabled against the team using only graphs. Finally, to show that the method avoids suboptimal decisions, games using only one Bayesian network to calculate the graph weights are played, once more against the team using only a graph. To measure how well the team fares with the proposed method, several statistics are recorded in each game: ball possession percentage, goal ratio and goal difference, and total and completed passes.
This work shows that the proposed approach to graphs can greatly improve decision-making results compared with a simple weighted graph, as well as the advantage gained from training a team's network using Q-learning.
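The Q-learning component described above can be sketched as a standard tabular update over (state, action) pairs. The state and action encodings, the reward, and the hyperparameters below are illustrative assumptions, not the thesis's actual values.

```python
# Hypothetical action set for the agent with the ball.
ACTIONS = ["pass_direct", "pass_into_space", "dribble", "shoot", "hold"]

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]

Q = {}
# A completed pass into space that reaches the attacking third earns reward 1.0.
q_update(Q, "midfield", "pass_into_space", 1.0, "attacking_third")
print(Q[("midfield", "pass_into_space")])  # → 0.1
```

In the offline setting the thesis describes, updates like this would be applied to logged game episodes between matches rather than during play, and the learned values would then adjust the decision graph's weights.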

Bayesian Networks for Micromanagement Decision Imitation in the RTS Game Starcraft
Abstract
Real-time strategy (RTS) games provide various research areas for Artificial Intelligence. One of these areas involves the management of individual units or small groups of units, called micromanagement. This research presents an approach that imitates a player's decisions as a means of combat micromanagement in the RTS game Starcraft. A Bayesian network is generated to fit the decisions taken by a player and then trained with information gathered from the player's combat micromanagement. This network is then deployed in the game to improve on the performance of the game's built-in Artificial Intelligence module. Moreover, because the increase in performance is directly related to the player's own play, it enriches the player's gaming experience. The results obtained show that imitation through the implementation of Bayesian networks can be achieved, yielding an increase in performance over that of the game's built-in AI module.
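The simplest form of the imitation idea above is to record (state, action) pairs from the player and fit conditional action frequencies to them. The feature names and logs below are illustrative assumptions; the actual work fits a Bayesian network rather than the bare frequency table sketched here.

```python
from collections import Counter, defaultdict

def fit(logs):
    """Fit per-state action counts from logged (state, action) pairs
    recorded while the player micromanages combat."""
    tables = defaultdict(Counter)
    for state, action in logs:
        tables[state][action] += 1
    return tables

def imitate(tables, state, default="attack"):
    """Return the player's most frequent action for this state,
    falling back to a default for unseen states."""
    if state not in tables:
        return default
    return tables[state].most_common(1)[0][0]

# Hypothetical discretized combat states observed from a player's replays.
logs = [(("low_hp", "outnumbered"), "retreat"),
        (("low_hp", "outnumbered"), "retreat"),
        (("full_hp", "even"), "attack")]
model = fit(logs)
print(imitate(model, ("low_hp", "outnumbered")))  # → retreat
```

A Bayesian network improves on this table by sharing statistics across correlated features, so the agent can generalize the player's style to combat states that never appeared verbatim in the logs.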
