AI development is an essential part of digital transformation. However, without proper guidelines, it can erode our human rights rather than support them. To prevent this grim scenario from unfolding, the European Union has released a set of guidelines for ethical AI development.
The first draft was published back in December 2018. Since then, over 500 comments have been submitted, all of which were taken into consideration when revising the guidelines. According to the EU, the following fundamental principles need to be at the forefront of every AI development project:
- Oversight and human agency
All AI systems should support fundamental human rights and never diminish human autonomy.
- Safety and robustness
AI algorithms should be able to handle unforeseen errors and inconsistencies.
- Data governance and privacy
Citizens should have full control over their data, knowing it won’t be used against them or in a discriminatory fashion.
- Transparency
All AI systems should be traceable on demand.
- Fairness, diversity and non-discrimination
Every AI system should accommodate the full range of human abilities, skills and requirements, ensuring accessibility for all.
- Environmental and societal well-being
Positive social change and ecological responsibility should be at the forefront of system priorities.
- Accountability
There should be safeguards that ensure accountability and responsibility for AI systems and their outcomes.
The proposal also suggests paying special attention to vulnerable individuals, including children, the elderly and people with disabilities. While the guidelines listed above are not legally binding, academics, human rights groups, developers, and businesses alike should find them a workable foundation.