
Dear all,

We are going to have two AI seminars in the coming week.

The first talk, *"AI and O.R. for Environmental Sustainability"* by Bistra Dilkina, is scheduled for October 7th (tomorrow), 1-2 PM PST. It will be followed by a 30-minute Q&A session with the graduate students. This is an *in-person* event and will be held at *KEC 1001* (the previous mail said this would be at a different location; please disregard that).

*AI and O.R. for Environmental Sustainability*
Bistra Dilkina
Associate Professor of Computer Science
Co-Director, USC Center of AI in Society
University of Southern California

*Abstract:* With the increasing anthropogenic pressures of urbanization, agriculture, deforestation, and other socio-economic drivers, as well as climate change, biodiversity and habitat conservation is a key sustainable development goal. Techniques from AI and O.R., and their hybridization, have an important role to play in providing both predictive and prescriptive tools to inform critical decision-making, which can help us do more with less in this important application domain. A prime example of the field of Computational Sustainability, this presentation will give several successful examples of the two-way street of research: providing useful solutions to real-world domain problems while advancing core methodology in AI and O.R. Key examples include using deep learning and satellite data for land cover mapping, predicting species distributions under climate change and optimizing spatial conservation planning, and developing data-driven techniques to curb illicit wildlife poaching and trafficking.

*Speaker Bio:* Dr. Bistra Dilkina is an associate professor of computer science at the University of Southern California, co-director of the USC Center of AI in Society, and the inaugural Dr. Allen and Charlotte Ginsburg Early Career Chair at the USC Viterbi School of Engineering.
Her research and teaching center on the integration of machine learning and discrete optimization, with a strong focus on AI applications in computational sustainability and social good. She received her Ph.D. from Cornell University in 2012 and was a postdoctoral associate at the Institute for Computational Sustainability. Her research has contributed significant advances to machine-learning-guided combinatorial solving, including mathematical programming and planning, as well as decision-focused learning, where combinatorial reasoning is integrated into machine learning pipelines. Her applied research in Computational Sustainability spans using AI for wildlife conservation planning, understanding the impacts of climate change on energy, water, habitat, and human migration, and optimizing the fortification of lifeline infrastructures for disaster resilience. She has over 90 publications and has co-organized or chaired numerous workshops, tutorials, and special tracks at major conferences.

The second talk, *"Rigorous Experimentation For Reinforcement Learning"* by Scott Jordan, is scheduled for Monday, October 10th, 2022, 1-2 PM PST. It will be followed by a 30-minute Q&A session with the graduate students. This is an *in-person* event and will be held at *KEAR 305*.

*Rigorous Experimentation For Reinforcement Learning*
Scott Jordan
Postdoc
University of Alberta

*Abstract:* Scientific fields make advancements by leveraging the knowledge created by others to push the boundary of understanding. The primary tool in many fields for generating knowledge is empirical experimentation. Although common, generating accurate knowledge from empirical experiments is often challenging due to inherent randomness in execution and confounding variables that can obscure the correct interpretation of the results.
As such, researchers must hold themselves and others to a high degree of rigor when designing experiments. Unfortunately, most reinforcement learning (RL) experiments lack this rigor, making the knowledge generated from them dubious. This dissertation proposes methods to address central issues in RL experimentation.

Evaluating the performance of an RL algorithm is the most common type of experiment in the RL literature, yet most performance evaluations are incapable of answering a specific research question and produce misleading results. Thus, the first issue we address is how to create a performance evaluation procedure that holds up to scientific standards. Despite the prevalence of performance evaluation, these experiments produce limited knowledge, e.g., they can only show how well an algorithm worked, not why, and they require significant time and computational resources. As an alternative, this dissertation proposes that scientific testing, the process of conducting carefully controlled experiments designed to further the knowledge and understanding of how an algorithm works, should be the primary form of experimentation. Lastly, this dissertation provides a case study using policy gradient methods, showing how scientific testing can replace performance evaluation as the primary form of experimentation. As a result, this dissertation can motivate others in the field to adopt more rigorous experimental practices.

*Speaker Bio:* Scott Jordan received his Bachelor's degree from Oregon State in 2015 while working with Tom Dietterich. He recently received his Ph.D. from the University of Massachusetts and is now a postdoc at the University of Alberta working with Martha White. His research focuses on reinforcement learning, with the goal of understanding the properties necessary for scalable and effective sequential decision-making, with work published at ICML, NeurIPS, and other ML venues.
His dissertation addresses the poor experimentation practices common in reinforcement learning research.

*Please watch this space for future AI Seminars:*
https://eecs.oregonstate.edu/ai-events

Rajesh Mangannavar,
Graduate Student
Oregon State University

----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged.