Invited Speakers

The Evolution of AI: How Medical Applications Have Stimulated and Guided the Field

Date and Time: Sunday, May 19th at 9:00 AM

Room:  TBA

Ted Shortliffe 

Chair Emeritus & Adjunct Professor of Biomedical Informatics, Vagelos College of Physicians and Surgeons, Columbia University in the City of New York
Abstract: Five decades have passed in the evolution of Artificial Intelligence (AI) and its medical applications. Medical AI has evolved substantially while tracking the corresponding changes in computer science, hardware technology, communications, and biomedicine itself. Emerging from medical schools and computer science departments in its early years, the AI in Medicine (AIM) field is now more visible and influential than ever before, paralleling the enthusiasm and accomplishments of AI and data science more generally. This talk will briefly summarize some of AIM's history, relating it to themes in AI itself, and provide an update on the status of the field as it enters its second half-century. The inherent complexity of medicine and of clinical care requires that AIM research and development address not only decision-making and analytical performance but also issues of usability, workflow, transparency, ethics, safety, and the pursuit of persuasive results from formal clinical trials. These requirements contribute to an ongoing investigative agenda for AIM R&D and are likely to continue to influence the evolution of AI itself.
Bio: Ted Shortliffe is Chair Emeritus and Adjunct Professor of Biomedical Informatics at Columbia University’s Vagelos College of Physicians and Surgeons. He also holds adjunct appointments at Arizona State University and Weill Cornell Medical College. Both a PhD computer scientist and a physician who has practiced internal medicine, Dr. Shortliffe is well known for his early work on the MYCIN expert system and its successor, ONCOCIN, which introduced graphical workstations to the medical AI field. Editor Emeritus of the Journal of Biomedical Informatics (Elsevier), he is also an editor of two Springer textbooks, Biomedical Informatics: Computer Applications in Health Care and Biomedicine (5th edition, 2021) and Intelligent Systems in Medicine and Health: The Role of AI (2022). Dr. Shortliffe previously served as President/CEO of the American Medical Informatics Association (AMIA) and was founding Dean of the University of Arizona College of Medicine in Phoenix. He has spearheaded the formation and evolution of graduate degree programs in biomedical informatics at Stanford, Columbia, and Arizona State Universities. An elected member of the National Academy of Medicine and the American Institute for Medical and Biological Engineering, he is also a fellow of the American College of Medical Informatics (ACMI), the International Academy of Health Sciences Informatics, and the Association for the Advancement of Artificial Intelligence. He was elected a Master of the American College of Physicians in 2002 and has also received the Association for Computing Machinery’s Grace Murray Hopper Award for his MYCIN work (1976), ACMI’s Morris F. Collen Award (2006), and the International Medical Informatics Association’s François Grémy Award of Excellence (2021).

Teaching Robots To "Get It Right"

Date and Time: Monday, May 20th at 9:00 AM

Room:  TBA

Joydeep Biswas

Associate Professor, Computer Science Department, UT Austin
Abstract: We are interested in building and deploying service mobile robots to assist with arbitrary end-user tasks in everyday environments. In such open-world settings, how can we ensure that robots 1) are robust to environmental changes; 2) navigate the world in ways consistent with social and other unwritten norms; and 3) correctly complete the tasks expected of them? In this talk, I will survey these technical challenges, and present several promising directions to address them. To "get it right", robots will have to reason about unexpected sources of failures in the real world and learn to overcome them; glean appropriate contextual information from perception to understand how to navigate in the world; and reason about what correct task execution actually entails.
Bio: Joydeep Biswas is an associate professor in the Department of Computer Science at the University of Texas at Austin. He earned his B.Tech in Engineering Physics from the Indian Institute of Technology Bombay in 2008, and his M.S. and Ph.D. in Robotics from Carnegie Mellon University in 2010 and 2014, respectively. From 2015 to 2019, he was an assistant professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. His research spans perception and planning for long-term autonomy, with the ultimate goal of having service mobile robots deployed in human environments for years at a time, without the need for expert corrections or supervision. Prof. Biswas received the NSF CAREER award in 2021, an Amazon Research Award in 2018, and JP Morgan Faculty Research Awards in 2018 and 2024.

Challenges and Opportunities of AI/ML and Autonomy for the Navy

Date and Time: Tuesday, May 21st at 9:00 AM

Room:  TBA

John Seel

Head of NSWC Dahlgren Division Warfare Control and Integration  Department
Abstract: Over the last dozen years, advances in machine learning have heralded and accelerated new generations of AI breakthroughs, with much of the innovation happening outside DoD and government. Since the US’ ability to compete in the 21st century depends, in part, on US leadership in data, analytics, and AI, DoD’s task is to adopt these innovations wherever they can add the most military value and to drive their diffusion across the enterprise. This talk will discuss the Navy’s approach to AI adoption and its hierarchy of AI needs, and will emphasize the aspects of the Navy and its mission that shape the environment for, and the demands on, desired AI solutions.
Bio: Dr. John Seel, SSTM, is the distinguished scientist/engineer for Naval Data Sciences. In this role, he is currently focused on improving AI/ML, autonomy, and data science capabilities within the surface Navy and on advancing the autonomous capabilities of unmanned systems.
Prior to assuming these duties, Dr. Seel served as department head of the Weapons Control and Integration Department at Naval Surface Warfare Center Dahlgren Division (NSWCDD). During his time as department head, he built and nurtured NSWCDD capability across the department portfolio, with a strong focus on data science, autonomy, and AI/ML.
Before becoming a department head, he served as the distinguished scientist/engineer for Warfare Systems Software SSTM. In this role, Dr. Seel led research in the areas of advanced computing technologies, including multicore determinism for weapon systems, computing architectures, reusable software frameworks, design patterns, development of automated test tools, large-scale data fusion and pattern recognition algorithms, and prototyping of autonomous decision algorithms and technologies.  
Dr. Seel has also served as the deputy chief engineer for Nuclear Weapons Surety and as branch head of the Nuclear Weapons Surety Technical Authority Branch for Strategic Systems Programs. At the Office of the Secretary of Defense, he was the director of the Emerging Capabilities Division of the Rapid Reaction Technology Office in the Office of the Deputy Assistant Secretary of Defense (Rapid Fielding).
Dr. Seel has led teams in developing operational technologies, products, and doctrine resulting from operational user requirements and mission needs statements. He applied a multi-discipline approach to identify technical solutions and transition them to the acquisition community, including the deployment of numerous systems to Iraq and Afghanistan, the United States Marine Corps Infrastructure Disruption Guide, Signals Intelligence projects, and other projects with varying degrees of hardware, software, and soft systems (human, political, and economic). He served as a member of the team that drafted Critical Infrastructure Protection documentation for the White House, which provided threat support by identifying future vulnerabilities to United States Critical Infrastructure. He also led teams working on stability operations and support operations on behalf of the Joint Chiefs of Staff.
Prior to joining the civil service, Dr. Seel served in the U.S. Army.
He is a graduate of the Defense Language Institute in Monterey, California. He earned a bachelor’s degree in computer science from Tennessee Technological University, a master’s degree in systems engineering from George Mason University, and a doctorate in systems engineering from George Washington University.
Dr. Seel’s awards include the Meritorious Civilian Service Award and the Defense Meritorious Service Medal.

Special Track Invited Speakers 

Neural Network Hardware Acceleration 

Neural Networks and Data Mining Special Track

Date and Time: TBA 

Room:  TBA

David Bisant

Central Security Service
Abstract: Deep learning models require hardware acceleration, and the demand for that acceleration is outstripping what current technology can deliver. If current trends continue, by 2045 one half of the world's electricity will be consumed by training deep learning models. This talk will cover the background and history of the field, the acceleration that is available today, and what is expected in the future.
Bio: Dr. David Bisant has over 30 years of experience in neural networks, machine learning, and the application of these algorithms to problems in engineering and the natural sciences. He has received training at Colorado State University, the University of Maryland, George Washington University, and Stanford University. He has held past positions at Medtronic and Stanford University. He is currently a member of the Central Security Service, where he works in the fields of high-performance computing, physical science research, and defense. He has been both a contributor to and an organizer of the FLAIRS Conference, and has co-chaired a number of special tracks, primarily the Neural Networks and Data Mining Special Track, which he has co-chaired for the last 18 years.

Explaining the Decisions of Learned Classifiers

Uncertainty Reasoning Special Track

Date and Time: TBA 

Room:  TBA

Adnan Darwiche

Professor, Computer Science, UCLA
Abstract: A central quest in explainable AI is understanding the decisions made by learned classifiers. Three dimensions of this understanding have been receiving significant attention in recent years. The first relates to characterizing necessary and sufficient conditions for decisions, thereby providing abstractions of instances that can be viewed as the "reasons behind decisions." The second relates to identifying minimal sufficient conditions for a decision, thereby characterizing minimal aspects of an instance that can justify the corresponding decision. The third relates to identifying minimal necessary conditions for a decision and, hence, characterizing minimal perturbations to the instance that would yield an alternate decision. In this talk, I will discuss a theory of explainable AI that targets these questions and show how it can be applied to a broad class of classifiers, including Bayesian networks, decision trees, random forests, and some types of neural networks.
Bio: Adnan Darwiche is a professor and former chairman of the computer science department at UCLA. He directs the automated reasoning group, which focuses on the theory and practice of probabilistic and logical reasoning and their applications to machine learning and explainable AI. His group is responsible for publicly releasing a number of high-profile reasoning systems (http://reasoning.cs.ucla.edu/). Professor Darwiche is a Fellow of AAAI and ACM, a former Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR), and the author of Modeling and Reasoning with Bayesian Networks (Cambridge University Press, 2009). He founded the "Beyond NP" initiative in 2015 (http://beyondnp.org/). Many of his works are featured at https://youtube.com/@UCLA.Reasoning