New guidelines on the ethics and governance of artificial intelligence in large multimodal models

New guidelines have been issued on the ethics and governance of large multimodal AI models – a rapidly developing artificial intelligence technology whose applications are used across many areas of healthcare.

The guidelines include more than 40 recommendations for governments, technology companies and healthcare providers to ensure that these models are used appropriately to promote and protect population health.

Large multimodal models can process diverse inputs such as text, video and images, and generate multiple kinds of output, simulating human communication and completing tasks they were not explicitly programmed to perform. These models have spread at an unprecedented pace: platforms such as ChatGPT, Bard and BERT have become popular among the public since 2023.

“Generative AI technologies have the potential to transform healthcare, provided that the developers, regulators and users of these technologies fully take into account the risks associated with them,” said one expert in the field. “We need transparent information and policies to manage the design, development and use of these models to achieve better health outcomes and address persistent disparities in healthcare.”

Possible benefits and risks

The guidelines highlight five main applications of AI models in the field of health, namely:

1. Diagnosis and clinical care: supporting responses to patients' inquiries.
2. Patient-guided use: advising patients on symptoms and treatment.
3. Clerical and administrative tasks: facilitating the documentation and summarization of patient visits.
4. Medical education: supporting medical and nursing training through simulated patient encounters.
5. Scientific research and drug development: accelerating the discovery of new compounds.

Despite these benefits, the models risk producing inaccurate or biased information that may influence health decisions. They may be trained on unbalanced data, leading to biases related to race, gender or age. They also face cybersecurity challenges that may compromise the privacy of patient information and the reliability of the algorithms.

The guidelines emphasize the need to involve all stakeholders – governments, technology companies, healthcare providers, patients, and civil society – in the process of developing and regulating these technologies to ensure their comprehensive and effective governance.

Key recommendations

The guidelines include recommendations for governments, which are responsible for setting the standards governing the development and use of these models. Among these recommendations are:

1. Investing in infrastructure: providing the necessary infrastructure, such as computing capacity and public datasets.
2. Applying strict policies and regulations: ensuring that models meet ethical and human rights standards concerning human dignity and privacy.
3. Appointing a regulatory body: to evaluate and approve AI models intended for use in healthcare.
4. Conducting independent audits: carrying out comprehensive audits and impact assessments, and publishing the results publicly.

The guidelines also include recommendations for developers of artificial intelligence models, who should:

1. Involve all stakeholders: engage potential users, healthcare professionals and patients at every stage of development to ensure transparency.
2. Design models for accuracy and reliability: ensure that models serve clearly defined goals within the health system and respect patients' rights, with the ability to anticipate possible secondary outcomes.
