Artificial Intelligence and Machine Learning

Leading textbooks on Artificial Intelligence define the field as the study of “intelligent agents”: any system that perceives its environment and takes actions that maximize the probability of achieving its goals.
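
As a minimal sketch of that definition (not drawn from any of the work described below; the class, action names, and probabilities are invented for illustration), an agent can be modeled as a loop that perceives a state and selects whichever available action maximizes its estimated probability of achieving its goal:

    class GoalSeekingAgent:
        """Toy 'intelligent agent': perceives a state and picks the action with
        the highest estimated probability of achieving its goal."""

        def __init__(self, actions, success_probability):
            self.actions = actions                          # available actions
            self.success_probability = success_probability  # callable: (state, action) -> probability

        def act(self, state):
            # Select the action that maximizes the estimated probability of success.
            return max(self.actions, key=lambda a: self.success_probability(state, a))

    # Illustrative (hypothetical) environment: brake when an obstacle is close.
    def p_success(state, action):
        close = state["distance_m"] < 20
        return 0.95 if (action == "brake") == close else 0.40

    agent = GoalSeekingAgent(["brake", "accelerate"], p_success)
    print(agent.act({"distance_m": 12}))   # -> "brake"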

David Mordecai, Samantha Kappagoda and John Shin Co-Authored Article “Objects May be Closer Than They Appear” in ABA SciTech Lawyer

David K.A. Mordecai, Samantha Kappagoda and John Y. Shin co-authored an article published in the Unintended Consequences (Fall 2022) Issue of the American Bar Association (ABA) SciTech Lawyer. The article is entitled Uncertainty and Reliability Implications of Computer Vision Depth Estimation for Vehicular Collision Avoidance and Navigation (Part 1 of 2).

Recent events accompanying the increased adoption of machine learning applications of computer vision to safety-critical use-cases for cyberphysical systems have sharpened focus on the necessity of risk mitigation, reliability, safety, and security. An emergent risk domain across embedded cyberphysical systems involves the proliferation of camera-based autonomous driver assistance and vehicular navigation systems and the application of computer vision technology to perform the complex tasks of depth estimation, as well as object detection and image recognition. The first installment in this series primarily focuses on depth estimation tasks associated with operating and environmental conditions, as well as spatial and temporal scales, generally applicable to highway, rural and suburban settings.
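
For context on the underlying task, one classical formulation of stereo depth estimation recovers range from pixel disparity via depth = focal length × baseline / disparity, so small disparity errors translate directly into range uncertainty. The sketch below is a generic illustration of that relationship with invented camera parameters; it is not code from the article or from any driver-assistance system.

    import numpy as np

    def disparity_to_depth(disparity_px, focal_length_px, baseline_m, min_disparity=1e-6):
        """Pinhole stereo relation: depth = f * B / d.

        disparity_px    : per-pixel disparity (pixels)
        focal_length_px : focal length expressed in pixels
        baseline_m      : separation between the two cameras (meters)
        Near-zero disparities (distant or unmatched pixels) are clamped to avoid
        division by zero, itself a source of depth uncertainty.
        """
        d = np.maximum(np.asarray(disparity_px, dtype=float), min_disparity)
        return (focal_length_px * baseline_m) / d

    # Hypothetical numbers: 700-pixel focal length, 0.54 m baseline, and a
    # disparity estimate that is off by a single pixel.
    f, B = 700.0, 0.54
    for d in (10.0, 9.0):
        print(f"disparity {d:4.1f} px -> depth {disparity_to_depth(d, f, B):5.2f} m")
    # A 1-pixel disparity error moves the estimate from ~37.8 m to ~42.0 m,
    # illustrating how estimation noise propagates into range uncertainty.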

ABA SciTech Lawyer endeavors to provide information related to current developments in law, science, medicine, and technology of professional interest to members of the ABA Section of Science & Technology Law.

Mordecai and Kappagoda are active members of the ABA Science and Technology (SciTech) Law Section. Ms. Kappagoda has been newly appointed as Vice-Chair of the Big Data Committee and Vice-Chair of the Internet of Things Committee, and has been reappointed as Vice-Chair of the Insurance Technology and Risk Committee, a role she has held since 2019. Dr. Mordecai has been reappointed as Chair of the Space Law Committee and Co-Chair of the Nanotechnology Committee, and previously served as Vice-Chair of the Artificial Intelligence (AI) and Robotics Committee from 2018 to 2022.

In addition, Dr. Mordecai has been an invited speaker at the AI & Robotics Institute in both 2021 and 2020. Both Kappagoda and Mordecai were invited speakers at the 2019 American Bar Association Annual Meeting and 34th Intellectual Property Law Conference (ABA-IPL) in Crystal City, Virginia. Dr. Mordecai previously authored the article Automated Personal Assistants with Multiple Principals: Whose Agent Is It? in the Winter 2020 edition of ABA SciTech Lawyer.

Dr. Mordecai and Ms. Kappagoda are President and Chief Economist, respectively, of Risk Economics, Inc. Dr. Mordecai is also Adjunct Professor of Econometrics and Statistics at the University of Chicago Booth School of Business and Visiting Scholar at the Courant Institute of Mathematical Sciences, NYU, advising research at RiskEcon® Lab @ Courant Institute. Ms. Kappagoda is also Visiting Scholar at the Courant Institute of Mathematical Sciences, NYU, co-advising research at RiskEcon® Lab @ Courant Institute. Mr. Shin is Senior Research Associate (Enumeration Evaluation Lead) at Numerati Partners.

Legal Tech News Features David K.A. Mordecai as an Invited Panelist at ABA SciTech AI and Robotics National Institute

Legal Tech News featured Dr. David K.A. Mordecai as an invited panelist at the American Bar Association (ABA) SciTech Artificial Intelligence (AI) and Robotics National Institutes Conference, which was held on October 12-13, 2021.

David K.A. Mordecai and his co-panelists discussed ways in which data supply-chain activities might incur liability related to data acquisition, curation, warehousing, use, dissemination and agency. During the panel discussion, Dr. Mordecai highlighted the roles of statistics, economics and digital forensics in analyzing the risk and liability exposure arising from data acquisition, collection and curation, and in mitigating data and algorithmic bias and the corresponding liability, noting, “You cannot regulate something that you do not understand.”

David Mordecai currently serves as Chair of the Nanotechnology Committee, Vice-Chair of the Artificial Intelligence & Robotics Committee and Co-Chair of the Space Law Committee, of the ABA Science & Technology Law Section. Dr. Mordecai is President and Co-Founder of Risk Economics and Visiting Scholar at Courant Institute of Mathematical Sciences NYU, co-advising research activities at RiskEcon® Lab for Decision Metrics @ Courant Institute.

About Risk Economics
Risk Economics provides advisory services at the intersection of commercial business-process engineering and risk engineering, with a particular focus on coupling commercial reinsurance and financial technology through the rigorous application of agent-based, demographic, and statistical methodologies to microeconomic and macroeconomic analytics. The Risk Economics® client roster is diverse and includes governmental and quasi-governmental agencies and regulators, global insurance and reinsurance firms, leading law firms, technology firms, global banking institutions, asset management firms, and multinational corporations with interests in natural resources, commodities and energy.

David K.A. Mordecai was Invited to Present at the 2021 ABA Artificial Intelligence and Robotics National Institutes

David K.A. Mordecai, President of Risk Economics, was invited to present at the American Bar Association (ABA) Artificial Intelligence and Robotics National Institutes conference on October 12-13, 2021. This year’s conference was held virtually due to the ongoing COVID-19 pandemic.

In the panel entitled Data Dump: How to Deal with a Heap of AI Big Data Liability and Compliance Issues, David K.A. Mordecai and his co-panelists discussed ways in which data supply-chain activities might incur liability related to data acquisition, curation, warehousing, use, dissemination and agency. During the panel discussion, Dr. Mordecai highlighted the roles of statistics, economics and digital forensics in analyzing the risk and liability exposure arising from data acquisition, collection and curation, and in mitigating data and algorithmic bias and the corresponding liability.

David Mordecai is the Chair of the Nanotechnology Committee, Vice-Chair of the Artificial Intelligence & Robotics Committee and Co-Chair of the Space Law Committee, of the ABA Science & Technology Law Section.

David K.A. Mordecai Authored Article Automated Personal Assistants with Multiple Principals: Whose Agent Is It? in ABA SciTech Lawyer

David K.A. Mordecai authored an article published on January 17, 2020, in the Winter 2020 edition of the American Bar Association (ABA) SciTech Lawyer, entitled Automated Personal Assistants with Multiple Principals: Whose Agent Is It?

The term automated personal assistant (i.e., virtual assistant) commonly refers to mobile software agents that perform tasks or services on behalf of an individual (i.e., the device user or application user) based on a combination of user input, location awareness, and the ability to access information from a variety of online sources (e.g., weather conditions, traffic congestion, news, stock prices, user schedules, retail prices, etc.). This article highlights some open questions and foundational principles relevant to contract and tort liability implications of software agency in this context.
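
As a purely hypothetical sketch of the mechanics described above (the class names, preference fields, and stubbed price feed are invented for illustration and are not drawn from the article), a virtual assistant can be modeled as software that acts on behalf of a principal by combining the principal's stated preferences, the user's location, and an external information source:

    from dataclasses import dataclass

    @dataclass
    class Principal:
        name: str          # the party on whose behalf the agent acts
        preferences: dict  # e.g., {"max_price": 20.0}

    class PersonalAssistant:
        """Toy virtual assistant: fuses user input, location, and an online source."""

        def __init__(self, principal, price_feed):
            self.principal = principal
            self.price_feed = price_feed   # callable: (item, location) -> price, standing in for an online source

        def recommend_purchase(self, item, location):
            price = self.price_feed(item, location)
            if price <= self.principal.preferences["max_price"]:
                return f"Buy {item} near {location} for ${price:.2f}"
            return f"Skip {item}: ${price:.2f} exceeds {self.principal.name}'s limit"

    # Illustrative usage with a stubbed price feed in place of a retail API.
    assistant = PersonalAssistant(
        Principal("device user", {"max_price": 20.0}),
        price_feed=lambda item, location: 18.50,
    )
    print(assistant.recommend_purchase("umbrella", "Midtown"))

The agency questions the article raises become acute when more than one principal, for example the device user and the platform operator, supplies preferences or instructions to the same assistant.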

David Mordecai is President of Risk Economics, Inc. He is an active ABA member, has been a speaker at ABA events and serves as Vice-Chair of the Science & Technology Law Section Artificial Intelligence and Robotics Committee as well as the Nanotechnology Committee.

David K.A. Mordecai was Invited to Present at the ABA Artificial Intelligence and Robotics National Institutes

David K.A. Mordecai, President of Risk Economics, was invited to present at the American Bar Association (ABA) Artificial Intelligence and Robotics National Institutes conference on January 9-10, 2020 in Santa Clara, CA.

In the panel entitled Investigations in the Era of AI, David K.A. Mordecai discussed practical implications of mathematical statistics, computational inference, machine learning and Artificial Intelligence (AI) in the context of large-scale, data-intensive technical investigations, e.g., algorithmic trading platforms, as well as for very large and highly complex litigations across finance and insurance, among other sectors:

  • Computational Forensics and technical aspects of evidentiary burden and standards for admissibility and weight of machine testimony and machine behavior
  • Principles of machine testimony and digital forensics: data adequacy, data sufficiency and representativeness
  • Inherent limitations of data and algorithms: data bias, sampling bias, algorithmic bias, software and hardware errors (see the sketch following this list)
  • Foundational technical principles and the fundamental forensic reliability of evidence and scientific analysis
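
As a generic illustration of the data-representativeness and sampling-bias points above (not a method attributed to the panel; the category shares and sample below are invented), a simple screen compares a sample's label frequencies against reference population shares using a chi-square goodness-of-fit test:

    from collections import Counter
    from scipy.stats import chisquare

    def representativeness_check(sample_labels, population_shares, alpha=0.05):
        """Chi-square goodness-of-fit: does the sample's category mix plausibly
        match the reference population shares? A small p-value flags possible
        sampling bias that warrants further investigation."""
        counts = Counter(sample_labels)
        categories = sorted(population_shares)
        observed = [counts.get(c, 0) for c in categories]
        n = sum(observed)
        expected = [population_shares[c] * n for c in categories]
        stat, p_value = chisquare(observed, expected)
        return {"chi2": stat, "p_value": p_value, "flag_bias": p_value < alpha}

    # Hypothetical dataset that under-samples rural observations.
    sample = ["urban"] * 800 + ["suburban"] * 150 + ["rural"] * 50
    population = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}
    print(representativeness_check(sample, population))

Such a screen addresses only one narrow facet of data adequacy and representativeness; it does not by itself establish admissibility or evidentiary weight.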

Dr. Mordecai served on the planning committee for this first-of-a-kind National Institute, assisting in shaping the agenda and speaker lineup. He is a regular speaker at ABA events and serves as Vice-Chair of both the Artificial Intelligence and Robotics Committee and the Nanotechnology Committee of the ABA Science & Technology Law Section.

Samantha Kappagoda and David K.A. Mordecai were Invited to Participate in CSIS Roundtable Series on Artificial Intelligence and National Security Applications

Ms. Samantha Kappagoda and Dr. David K.A. Mordecai were invited to participate in a series of closed senior-level roundtable strategy and policy discussions regarding Artificial Intelligence (AI) and its applications to national security. The sessions were hosted by the Center for Strategic and International Studies (CSIS) on April 18, June 20, and July 25, 2018, at its Washington, D.C. headquarters. The discussions were held under the Chatham House Rule and were supported by an ongoing CSIS study funded by Thales USA.

CSIS described the topics of the three sessions as follows:

The April 18, 2018 session primarily focused on establishing a conceptual framework for the application of AI to national security issues. In many national security discussions, establishing a clear and precise conceptual framework is critical in order to adequately differentiate AI applications from related but distinct technologies, e.g., unmanned systems (satellites and drones), the internet of things, robotics, and quantum technologies (i.e., computing, sensing, communication, encryption). In addition, this session sought to identify and articulate the roles for the adoption of AI, and to categorize the preexisting national security applications of AI.

The June 20, 2018 session primarily focused on issues related to the adoption of AI technologies and techniques within the Department of Defense (DoD), characterizing the current state of AI adoption in national security applications across a range of areas, e.g., logistics and intelligence, along with a brief comparison with the state of adoption in the private sector, i.e., near-term adoption trends as well as enablers of and barriers to continuing the trajectory for the DoD. Furthermore, this workshop explored incentives that might facilitate or discourage the adoption of AI technologies, identified adoption-related disruptions, and addressed how such disruptions might be mitigated.

The July 25, 2018 session primarily focused on issues related to the operational management of AI techniques in national security applications:

  • The current state of management of AI as well as pathways to operational capability for future programs,
  • Requirements arising from the use of AI in the national security mission space, e.g., data ownership, software acquisition, network risk management, and user needs from the tactical to the strategic level, and
  • Issues of trust, verification, and reliability in AI systems, as well as the applicability of existing guidelines and practices to these systems.

Furthermore, this session introduced broader discussion regarding policy opportunities in guiding the ethical implementation and operational management of AI.

About CSIS
The Center for Strategic and International Studies (CSIS) is a bipartisan, nonprofit policy research organization dedicated to advancing practical ideas for addressing challenges. Founded in 1962, the stated purpose of CSIS is to define the future of national security, guided by a distinct set of values – non-partisanship, independent thought, innovative thinking, cross-disciplinary scholarship, integrity and professionalism, and talent development.