Insurers are already using AI to improve customer service, increase efficiency, gain greater insight into customers' needs and prevent fraudulent transactions. Customers are embracing this innovation in insurance, as it responds to their needs and makes their interactions with insurers more convenient.
As noted by the Commission's high-level expert group on AI in its policy recommendations, the development and use of AI is already covered by a wide body of existing EU legislation, such as on fundamental rights, privacy and data protection, as well as product safety and liability. This is further complemented by national and sectoral regulatory frameworks.
To promote the uptake of AI and prevent innovative technologies from being stifled by premature regulation, the ethical use of AI should therefore be supported by, and reinforced through, voluntary and/or non-legislative instruments as far as possible. Voluntary certifications have traditionally proven to be an effective means of ensuring high and transparent standards (eg in the area of IT security).
Moreover, an approach that focuses mainly on voluntary instruments (eg industry-developed codes of conduct or guidelines) remains compatible with the option to introduce legislative instruments containing mandatory requirements for certain AI applications.
However, it is important to ensure that any EU legislative instrument that may be introduced is horizontal, proportionate and risk-based, and limited only to "high-risk" AI applications determined on the basis of clear criteria. Bringing low-risk, common automation processes, or applications that pose little or no risk to customers' rights, within the scope of such requirements would hinder innovation and the uptake of new technologies, give rise to additional costs, and create a burden disproportionate to the low risk involved.