Title
Ethical Issues and Social Acceptance of AI-Based Health Risk Prediction Tools
Alternative Author(s)
Lim, Jae Kang
Keyword
Artificial Intelligence; Health Risk Prediction; Medical AI; Ethical Issues; Social Acceptance; Transparency
Publication Year
2025-09-30
Publisher
Korea Institute for Health and Social Affairs
Citation
Health and Social Welfare Review Vol.45 No.3, pp.43-67
Abstract
This study conducts a comparative analysis of the ethical issues and social acceptance of artificial intelligence (AI)-based health risk prediction tools in the United States, the European Union (EU), South Korea, and China. While such tools offer transformative potential for early disease detection, personalized treatment, and healthcare resource optimization, they also pose significant ethical challenges, including concerns over data governance (privacy, security, consent), algorithmic integrity (bias, fairness), transparency and explainability, accountability, and the risk of discrimination. The findings indicate that although these regions share common concerns, they differ in their normative priorities: the EU emphasizes fundamental rights protection, the US focuses on health equity, China prioritizes social stability, and South Korea seeks a balance between innovation and regulation. Social acceptance also varies widely: public attitudes toward medical AI are highly optimistic in China, more skeptical in the US and the EU, and cautious in South Korea. Regulatory approaches reflect these differences, with the EU adopting comprehensive ex-ante regulation, the US favoring a sector-specific approach, China pursuing goal-oriented control, and South Korea implementing a risk-based framework. These variations are shaped by cultural norms, political structures, economic priorities, and technological readiness, offering key insights for global AI governance, international cooperation, and cross-border policy alignment.