This year has seen an explosion in generative AI applications, and leaders in every organization are eager to explore how to leverage the technology, with many viewing it as a crucial strategic move in their digital transformation. However, precisely because generative AI is so powerful, countries worldwide are enacting AI-related regulations, prompting leaders to weigh not only the obvious productivity gains but also whether their use of the technology is trustworthy and ethically compliant.
This is especially true in healthcare, where protecting patient privacy and ensuring the accuracy and reliability of AI outputs matter even more.
The core technology of generative AI is a powerful large language model, so how medical institutions manage the training and practical deployment of such models is crucial. A large language model rests on three elements: algorithms, data, and computing power. On the algorithm side, there are now open models licensed for commercial use, such as LLaMA 2 and BLOOM, which users and even future regulatory agencies can fully evaluate. On the data side, institutions typically start from a pre-trained model rich in general online knowledge and then fine-tune it with their own private data to improve accuracy or better meet their needs, so they must ensure that both privacy and legal compliance are addressed whenever that data is used.
In addition, generative AI takes in real patient data during the utilization phase, so institutions must ensure that the large language models they use are highly controllable and that input data is neither leaked nor improperly reused for training. Finally, there is computing power: generative AI demands substantial computing resources in both the training and utilization phases, so institutions must also weigh the reliability of that capacity, including whether to rely on cloud service providers.
In June of this year, the European Parliament voted to pass a draft bill entitled the "Artificial Intelligence Act," the first comprehensive AI regulation in the West, and the United States has also released a policy blueprint for generative AI. In Taiwan, the Executive Yuan is expected to propose a "Basic Law on Artificial Intelligence" in early September, and the AI 2.0 Action Plan launched this year likewise places greater emphasis on work related to trustworthy AI.
Generative AI has undoubtedly changed the world at remarkable speed, accelerating digital transformation and the widespread application of AI across many fields. We should not merely be users of ChatGPT; we must also take seriously the issue of trustworthy AI in industrial applications, so that society as a whole can benefit from the technology that has had the greatest impact in recent years.
Source: Financial News Issue 691