Event start date and time:
Event end date and time:
Venue:
SBAI Department
External room:
Room 1B, RM002
Speaker and affiliation:
Adriano Barra
Timetable: February, every Monday, Wednesday and Friday (with two exceptions) at 15:00 in Room 1B, RM002, SBAI Department. After a streamlined historical introduction (e.g. the Turing machine, Rosenblatt's perceptron and the AI winter), the course focuses on the information-processing capabilities of modern neural networks, both biologically inspired ones (e.g. the Hopfield model and its variations on a theme) and non-biologically driven ones (Boltzmann machines and feed-forward networks), together with the related algorithms for learning and automatic recognition (e.g. Hebbian learning, contrastive divergence, backpropagation). The methodological tools will rely heavily on the statistical-mechanical formalization of neural networks and will be discussed in full detail to inspect their emergent properties (i.e. those not immediately deducible from the behavior of a single neuron). Specifically, we will try to understand how these networks learn and abstract from examples supplied by the external world and how, subsequently, they use what they have learned to respond appropriately when stimulated. We will also understand how, and why, these networks can sometimes make mistakes.
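To give a flavor of the first topic above, here is a minimal sketch (not the course's own material) of the Hopfield model with Hebbian learning: a single pattern is stored in the coupling matrix, and zero-temperature asynchronous dynamics retrieves it from a corrupted cue. All variable names and parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of binary (+/-1) neurons

# One random pattern to store
pattern = rng.choice([-1, 1], size=N)

# Hebbian learning rule: J_ij = (1/N) * xi_i * xi_j, no self-coupling
J = np.outer(pattern, pattern) / N
np.fill_diagonal(J, 0)

# Corrupt 10 of the 64 spins to create a noisy cue
state = pattern.copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1

# Asynchronous zero-temperature dynamics: align each neuron
# with its local field, sweeping a few times over the network
for _ in range(5):
    for i in range(N):
        h = J[i] @ state
        state[i] = 1 if h >= 0 else -1

# Overlap with the stored pattern: m = 1 means perfect retrieval
overlap = (state @ pattern) / N
print(overlap)  # → 1.0
```

With a single stored pattern the local field of every neuron points toward the stored configuration, so the noisy cue is repaired within one sweep; the interesting statistical-mechanics questions covered in the course arise when many patterns compete for storage.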
Contacts/Organizers:
lorenzo.giacomelli@uniroma1.it