Responsibility for regulating AI use is shared across multiple stakeholders: governments, international organizations, technology companies, and academic institutions. Governments, most visibly the European Union with its AI Act, are establishing frameworks intended to make AI development and deployment transparent, accountable, and safe. International bodies such as the OECD and UNESCO shape global AI ethics through their recommendations and guidelines, while technology companies set up internal ethics committees and adopt self-regulatory standards for responsible AI development.

Measures already in place include mandatory risk assessments for high-risk AI systems, data privacy protections, and fairness audits designed to detect and prevent bias. Sustained ethical AI research, transparency about how algorithms work, and collaboration among these stakeholders remain essential if AI is to benefit society while minimizing harm.
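To make the idea of a fairness audit concrete, the sketch below shows one common check: comparing selection rates across demographic groups (a demographic parity measure). It is a minimal illustration, not a procedure mandated by the AI Act or any other framework; the group labels, predictions, and the 0.1 disparity threshold are illustrative assumptions.

```python
# A minimal sketch of one check a fairness audit might include:
# comparing positive-prediction (selection) rates across groups.
# Group labels, predictions, and the 0.1 threshold are assumed values.

from collections import defaultdict

def selection_rates(groups, predictions):
    """Return the fraction of positive predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(groups, predictions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(groups, predictions)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical outputs from a loan-approval model.
    groups = ["A", "A", "A", "B", "B", "B", "B"]
    preds  = [1,   1,   0,   1,   0,   0,   0]
    gap, rates = demographic_parity_gap(groups, preds)
    print(f"Selection rates: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # assumed audit threshold, not a regulatory value
        print("Flag for review: disparity exceeds audit threshold.")
```

In practice, audits combine several such metrics (e.g., equalized odds, calibration by group) with documentation of data provenance and model limitations; a single gap statistic like this is only a starting point for review.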