
Study Group on Issues and Policies in the Application of AI (Phase 2)

AI-embedded products and services ("AI systems") are rapidly penetrating all aspects of society, from internal organizational applications such as research and documentation, to regulated domains such as financial services, to automated driving and medical devices that can directly affect human life and death. These AI systems are greatly improving our productivity and quality of life, and are now bringing about irreversible changes in society.

At the same time, as the range of settings in which AI systems are deployed expands, so do the domains, severity, and volume of the risks they pose. These risks include profiling based on large volumes of collected personal information, inducement of malicious behavior, malfunctions of self-driving cars and medical devices, discrimination and widened inequality of opportunity caused by amplified bias, security risks from easier access to CBRN (chemical, biological, radiological, and nuclear) information, deepfakes, and threats to individual dignity and democracy, among many others. One way to manage such risks is through the certification of AI systems.

As an outcome of the first phase of the study group, established last year, we surveyed the legal frameworks for the certification and evaluation of AI already in place in various countries and found that certification systems for AI are still in their infancy worldwide, highlighting a number of challenges (see the study group's commissioned research report, "Comparative Legal Study on Certification Systems for AI Systems," available from the Center for the Promotion of International Economic Partnerships (CIPA)).

The second phase of the study group, launched this year, aims to analyze state-of-the-art use cases in the fields where AI systems are deployed and the status of institutional developments in Japan and abroad, and to present a model certification mechanism that can serve as a cross-sectoral reference for the certification of AI systems (including third-party certification, self-certification, and verification and validation), thereby contributing to strengthening Japan's industrial competitiveness in the AI society.

Study Group Members (honorifics omitted, in alphabetical order; positions are as of July 2025, when the study group was established)

Chair:
Hiroki Habuka
(Research Professor, Graduate School of Law, Kyoto University / Attorney-at-Law)
Committee Members:
Koichi Ito
(Partner, Certified Public Accountant / PwC Japan LLC)
Tatsuhiko Inatani
(Professor, Graduate School of Law, Kyoto University)
Takafumi Ochiai
(Attorney-at-Law (Senior Partner), Atsumi & Sakai / Co-Founder and Representative Director, Smart Governance Inc.)
Ryoichi Sugimura
(Chief Collaboration Officer / Department of Information Technology and Human Factors, National Institute of Advanced Industrial Science and Technology (AIST))
Kwan-Wei Chen
(Specified Assistant Professor / Graduate School of Law, Kyoto University)
Zheng Yeochang
(Senior Research Manager / AI Security Core Project, Data & Security Laboratory, Fujitsu Limited)
Kumiko Takahashi
(Senior Researcher / Social Infrastructure Business Division, Mitsubishi Research Institute, Inc.)
Keisuke Tomiyasu
(CTO, Head of R&D Division / AI Medical Services, Inc.)
Takayuki Hirose
(Specified Lecturer / Graduate School of Law, Kyoto University)
Observers:
David Uriel Socol de la Osa
(Associate Professor / Hitotsubashi University, School of Advanced Social Sciences)
Hiroki Takamura
(Chief / Information-technology Promotion Agency, Japan (IPA), Certification System Development and Promotion Office)

1st meeting
July 23, 2025 15:00-17:00
Mr. Habuka explained that a "joint certification" system, which evaluates organizations and their management measures in an integrated manner, is an effective method for managing AI risks that are becoming increasingly diverse and serious, and gave examples of the adoption of this approach in the EU, the US, and Japan.
The study group agreed to (1) consider not only third-party certification and self-certification as conformity assessment methods but also other methods (verification and validation, inspection, and testing); (2) in examining the object of certification, study the foundation models that constitute an AI system and their boundaries with the AI services that utilize them; and (3) analyze use cases of AI systems in individual fields, such as automated driving and medical devices, and study models of certification mechanisms that can serve as a cross-field reference.