Comment

Artificial intelligence and health literacy—proceed with caution

Article: 2263355 | Received 04 Sep 2023, Accepted 15 Sep 2023, Published online: 17 Oct 2023

Abstract

Improved access to AI platforms has prompted greater awareness of their potential application in public health and healthcare. Although the COVID-19 pandemic accelerated existing trends in the use of digital health technologies, our understanding of the potential of AI to assist with patient and public health communication is still limited. This paper examines both actual and potential use of AI platforms in improving health literacy through an examination of current scientific literature and an “interview” with ChatGPT on health literacy. Although AI platforms can answer health questions using natural, accessible language and have the potential to greatly expand access to information, current limitations stem from derivative content and the potential for inaccuracies and amplification of misinformation. Patients and consumers vary considerably in their access to digital technology; in their skills to discriminate the accuracy and reliability of information; and in their trust and responsiveness to what digital technologies have to offer. At this stage in their evolution, AI platforms require significant human professional leadership and judgement. Those of us engaged in improving health literacy have an important role to play in influencing the future direction of AI in health communication, including and especially by engaging in research and development activities that build evidence of effectiveness and support the development of health literacy skills alongside the expansion of technologies.

Health literacy and digital technology

Health literacy describes the personal knowledge and skills which enable people to find, understand, appraise and use information and services in ways that promote and maintain health (Nutbeam & Muscat, Citation2021). These skills are significantly mediated by access to resources and the demands and complexities of the environment in which a person is seeking information (Nutbeam & Muscat, Citation2021). Understanding these two dimensions to health literacy leads us to actions intended both to strengthen people’s skills in finding and using trustworthy health information and to reduce the demands and complexity of the health environment.

Over the past 20 years as digital technologies have evolved, the use of digital media for health communication has grown substantially. These technologies enable us to communicate directly with large numbers of people at relatively low cost. They offer unprecedented opportunities to target and personalise information and to engage people in interactive communication (Koh et al., Citation2021; Muscat et al., Citation2023). Digital technologies have also created new challenges—providing easy access to information and opinion that is inaccurate, sometimes deliberately misleading, and often driven by commercial motive (Swire-Thompson & Lazer, Citation2020).

As the use of digital media has grown, the concepts of eHealth literacy and digital health literacy have evolved to describe the skills required to find and use health information from digital sources and apply the knowledge gained to addressing or solving a health problem (Norman & Skinner, Citation2006).Footnote1 Digital health literacy skills can be categorised as functional, interactive and critical (Nutbeam, Citation2000). Functional health literacy describes basic-level skills that are sufficient for individuals to obtain relevant health information (e.g. from the internet) and to apply that knowledge to a range of well-defined actions. Interactive health literacy describes more advanced literacy skills that enable people to actively extract health information and derive meaning from different sources, including a range of digital media; to apply new information to changing circumstances; and to engage in interactions with others to extend the information available and make decisions. Individuals with these higher-level skills are better able to discriminate between different sources of information; to respond to health communication and education that is more interactive; and to adapt their responses to health information to reflect this deeper understanding. Critical health literacy describes the most advanced literacy skills that incrementally build on those described above to enable people to critically analyse information from a wide range of sources, and across a greater range of health determinants.

Digital health literacy skills are also significantly mediated by the context in which people seek health information, especially their experience with the technologies used. Improving digital health literacy requires us to focus both on the ways we can reduce the complexity of the information system and environment; and on helping people develop functional, communicative and critical skills to enable them to identify and engage productively with relevant, trustworthy sources of information on digital platforms. Digital health literacy skills can be developed in much the same way as more generic health literacy skills—through exposure to different communication methods and content and different digital media. The development of these skills is fundamentally dependent on inclusive and equitable access to quality education and life-long learning. These are skills developed over a lifetime, first and foremost through the school curriculum (World Health Organisation [WHO], Citation2016).

AI and health literacy

The launch of more publicly accessible AI platforms such as OpenAI’s ChatGPT and Google’s Bard has prompted much greater public and professional awareness of these rapidly evolving technologies and highlighted their potential application in many sectors, including public health and healthcare. Artificial intelligence is a widely and often inaccurately used term. While various definitions exist, at its simplest AI is a field of computer science that focuses on the development of technologies capable of performing tasks that typically require human intelligence, without every step in the process having to be programmed explicitly by a human.Footnote2 Through the analysis of large datasets, machine learning models can recognize patterns, make predictions, and adapt their behaviour over time. These models can automatically create new content, with many machines “trained” to learn and improve from experience.

As the technology and its applications have evolved, these platforms can be used not only to find information but also to engage in interactive and incremental communications. Conversational AI is a specific application of AI that focuses on enabling computers to interact with humans through natural language. It involves technologies like chatbots and virtual assistants that can understand and respond to human language, allowing for more natural and interactive communication. Conversational AI often utilizes large language models as its core component. These models are trained on vast amounts of text data to understand human language, generate responses, and engage in natural conversations with users. In this way, large language models provide the foundation for more advanced and interactive conversational AI systems. Importantly, conversational AI applications, such as chatbots and virtual assistants, can have limitations placed on the data sources they use and the responses they provide to ensure accuracy, relevance, and quality.

The use of conversational agents such as chatbots has become widespread in online commerce and has been adopted by some government agencies and non-government organisations, including health organisations. The potential application of AI to public and patient communication has been discussed for some time (see, for example, Green et al., Citation2013). The global COVID-19 pandemic accelerated existing trends in the use of digital technologies in public health and health care. This included, for example, the experimental use of AI chatbots to provide public information and support patient management (Chow et al., Citation2023); and the use of AI capabilities to make COVID information available in multiple languages (Yang, Citation2023). There is a small but growing number of examples of chatbots that have either been designed or adapted for use directly with patients, for example in the management of chronic conditions (Chaix et al., Citation2020); and cancer screening, prevention and management (Görtz et al., Citation2023; Wang et al., Citation2023). Where evaluations have been undertaken, the results to date have been mixed and hard to generalise. This includes highly varied human responses to chatbots, with, for example, significant differences in levels of trust in the information provided that are often related to level of health literacy (Fan et al., Citation2021; Kang et al., Citation2023). Similar variations in the responsiveness of health professionals to the use of AI have also been observed (Palanica et al., Citation2019).

Whilst these experiences have identified the potential use of AI platforms in patient and community health communication and begun to explore the implications for digital health literacy, there remains only limited evidence of practical applications (Liu & Xiang, Citation2021). The experiences gained in the exceptional circumstances of the COVID pandemic are now being tested and examined more systematically. Our understanding of the potential of AI to assist with patient and public health communication is improving but is still limited. The vast majority of scientific publications are descriptive, and most report on the use of text-based, AI-driven, and smartphone app-delivered conversational agents (Sallam, Citation2023; Tudor Car et al., Citation2020). These reviews of current applications consistently urge caution—concluding, for example, that there is “urgent need for a robust evaluation of diverse health care conversational agents’ formats, focusing on their acceptability, safety, and effectiveness” (Tudor Car et al., Citation2020); raising concerns regarding the technology’s potential misuse; and advocating for “appropriate guidelines and regulations … with the engagement of all stakeholders involved” to ensure the safe and responsible use of the technology (Sallam, Citation2023).

An interview on health literacy with ChatGPT

At the time of writing, the use of AI to support health and patient communication could, at best, be described as emergent and, at worst, as somewhat over-hyped. In an attempt to get an informed understanding of the current state of evolution I “interviewed” ChatGPT (Version 3.5) to gain a first-hand experience of the type of information it would make available, and in what form, if asked a series of incremental questions on AI and health literacy.

Box 1 summarises ChatGPT-identified applications of AI to patient education and health literacy and provides a summary of the identified strengths and limitations of the responses provided. It is important to note that ChatGPT is a learning model that would provide different answers to the same question if regenerated or asked again at a later point in time, and would have responded differently had the questions been posed in a different way.

Box 1 Applications of AI to patient education and health literacy and reflections on ChatGPT.

Where to from here—how can we optimise the potential application of AI in health literacy?

Taken as a whole, the responses from ChatGPT provide a useful snapshot of the current stage of development of this type of AI platform and its more immediate applications for those of us interested in improving health literacy. The responses illustrate the ability of AI-driven platforms to find information from a wide variety of sources and organise it into well-structured, coherent responses using natural language. This has obvious and immediate applications, discussed below, for those of us working on the development of written materials for use by the public. Here AI platforms can work as “assistants” to health professionals, in circumstances where the professional retains control over content and its communication.

At their current stage of evolution these AI platforms have immediate application in:

  • Improving the quality of written health communication: Platforms such as ChatGPT respond well when given a specific task such as “explain a heart attack using language that could be understood by a 5th grade student.” It will draw upon the vast materials available to it to provide a comprehensive, easy-to-understand response in natural language. At this stage in the evolution of the technology further human intervention is required to check accuracy and fine-tune messaging, but the technology can provide an advanced starting point. This is fast, convenient and efficient, and concentrates human input where it adds most value. With such digital technologies available to us there really is little excuse not to test and simplify the language used in all forms of written communication within our health care organisations, online and in public health communications in the wider community.

  • Improving the quality of written communication in multiple languages: These same technologies can also provide the same information in multiple languages for those of us working in diverse communities. Again, the response will require human intervention not only to check accuracy but also to reflect local cultural norms and values. Current technology is sufficiently advanced to make the most efficient and value-adding use of human interpreter time to ensure cultural sensitivity and relevance. Over time, as AI platforms evolve through training, ChatGPT expects that it will be more skilled in communications that “understand cultural nuances.”

  • Improving the public’s direct access to health information: AI-supported conversational agents can improve direct public access to health information. Regardless of the confident response from ChatGPT, this application of AI is still at an early stage of development and needs to be approached as experimental. The ChatGPT responses indicate future potential for AI platforms to directly provide accurate and trustworthy health information in forms that are more accessible, personalised and adaptive to the needs and preferences of our patient and community populations. We are still some way from achieving this potential—in the accuracy of information generated; in consumer access, confidence and trust in using the technology; and in the regulation of the operating environment.

  • Building health literacy skills and addressing the social determinants of health: Although the interview with ChatGPT identified the potential of AI to build communicative and critical health literacy skills through, for example, gamification and the provision of information about the social determinants of health, evidence to support this is currently lacking. As discussed further below, the development of these skills may be better achieved in parallel with AI, such as through educational settings.
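The readability target described in the first application above can be partly automated. As a minimal sketch, a short script can estimate the Flesch-Kincaid grade level of AI-drafted text and flag anything above a 5th-grade target (the vowel-group syllable counter here is a rough heuristic, and such a check supplements rather than replaces human review for accuracy and tone):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

draft = "A heart attack happens when blood cannot reach your heart."
if fk_grade(draft) > 5.0:
    print("Draft exceeds the 5th-grade target; simplify further.")
else:
    print("Draft is within the 5th-grade target.")
```

Simple checks of this kind make it practical to routinely test and simplify the language of written materials before a human editor fine-tunes the messaging.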

Proceeding with caution

As health professionals one of our most important roles in health literacy is to reduce the demands and complexity of health information to enable access to trustworthy, easy to understand health information that supports patients and community members in taking actions to promote and protect their health. AI platforms have the potential to deliver this.

Alongside actions to reduce the demands and complexity of the environment in which they are seeking information, our goals in health literacy are also to strengthen people’s skills and capabilities to find and use trustworthy health information. As access to intelligent machines increases, we can contribute in both domains. In our direct work with patients, consumers and communities we can take actions that better equip people with the knowledge, skills and judgement required to optimise the potential benefits of these new technologies (and see through the hype)—to improve digital health literacy. Building digital health literacy in an AI era will involve helping people develop skills that enable understanding of the strengths and weaknesses of conversational agents and other AI platforms, including the risks of misinformation. This should start with young people in schools and by definition will require the more advanced, critical health literacy skills described above. These skills are best developed through more interactive communication methods and media, including and especially through human interaction.

We should also look for opportunities to more actively contribute to the development of foundational algorithms and the “training” of intelligent machines that will make accurate and trustworthy health information more accessible and reduce the risks of misinformation. Conversational AI systems can be trained on pre-selected, verified data from reliable and authoritative sources. This ensures that the AI model is exposed to accurate and high-quality information during training. In addition, access to data sources known for spreading misinformation or unreliable information can be limited or prevented. Engaging actively with AI platforms and technologies will enable us to contribute to the optimal development of more personalised forms of health communication through digital media that are adaptive to the needs and preferences of our patient and community populations.
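The source-restriction idea described above can be sketched in a few lines. In this illustrative example (the domain allow-list, data shapes and function names are hypothetical, and a real system would sit inside a fuller retrieval pipeline), candidate passages are filtered so a conversational agent only draws on vetted sources:

```python
# Hypothetical allow-list of vetted health information domains.
TRUSTED_DOMAINS = {"who.int", "cdc.gov", "healthdirect.gov.au"}

def domain_of(url: str) -> str:
    # Extract the bare domain from a URL (deliberately simplified parsing).
    return url.split("//")[-1].split("/")[0].removeprefix("www.")

def filter_sources(passages: list[dict]) -> list[dict]:
    # Keep only passages retrieved from vetted domains; anything else
    # is excluded before the agent composes its answer.
    return [p for p in passages if domain_of(p["url"]) in TRUSTED_DOMAINS]

retrieved = [
    {"url": "https://www.who.int/health-topics/diabetes", "text": "..."},
    {"url": "https://example-miracle-cures.com/post", "text": "..."},
]
vetted = filter_sources(retrieved)  # only the who.int passage survives
```

The same principle applies at both ends of the pipeline: curating the corpus an AI model is trained on, and restricting the sources it is allowed to cite when answering.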

This direct engagement with the technology should be set alongside active advocacy for effective regulation of the information environment and/or accreditation and quality standards for the provision of health information. In promoting the responsible use of AI we can focus attention on the ethical choices associated with how an organisation adopts and uses AI capabilities. Generally, this involves advocating for transparency in how an AI model works and the ability to explain why a specific decision in an AI model was made; fairness in ensuring that a specific group is not disadvantaged based on an AI model decision; and can involve attention to sustainability by ensuring that AI models are developed on an environmentally sustainable basis.

This may not be as daunting as it seems. The more reputable and established platforms are actively seeking feedback on this range of issues and publicly advocating for regulation. Several governments are developing policies and regulations to ensure the safe and responsible use of AI (see, for example, the Australian Government discussion paper—Australian Government Department of Industry, Science and Resources, Citation2023).

Figure 1 summarises the current position, offering a dynamic view of the different methods for improving health literacy, and highlighting potential actions as AI evolves.

Figure 1. Improving health literacy utilizing AI platforms and capabilities (Nutbeam & Muscat, Citation2023).


In Figure 1 the two core aims highlight the need to develop personal skills and abilities (1) alongside action to reduce complexity and improve reliability and trust in the information environment (2). Four strategies are highlighted for improving health literacy. Two generic approaches to help develop personal skills and abilities are identified—improving access to accurate, trustworthy information across the life-course (3); and working directly with the public and patients to develop skills and confidence in applying information (4) in ways that promote and protect health. Two further strategies are identified to help reduce situational demands and complexities—reducing the complexity/improving the quality of communication with patients and the public (5); and regulating the information environment (6) in ways that make it easier to access accurate and trustworthy health information. The figure then highlights the immediate direct use of AI to provide assistance in improving the quality and reach of communications (7) in the ways described above. It also indicates the potential for developing personalised and interactive communications that are tailored to the needs and preferences of individuals and can build trust, skills and confidence that fit with the interactive and critical health literacy skills described earlier (8). Finally, it signals the importance of engaging in actions to ensure proper regulation of the information environment, and more positive accreditation of trustworthy sources of information (9).

Conclusions

AI platforms such as ChatGPT can answer health questions using natural, accessible language, with the potential to greatly expand access to information, answer health-related questions, provide personalized health advice, and engage users in continuing support. At their best, these platforms can provide feedback that is highly targeted, tailored to individual preferences and adapted as circumstances change. However, at the time of writing, this potential has not yet been substantially tested or realised. There are additional limitations related to the fact that answers are currently almost entirely derivative, with the potential for inaccuracies and amplification of misinformation. Patients and consumers vary considerably in their access to digital technology; in their skills to discriminate the accuracy and reliability of information; and in their trust and responsiveness to what digital technologies have to offer. Given this, the current stage in the evolution of AI platforms requires significant human professional leadership and judgement. Those of us engaged in improving health literacy have an important role to play in influencing the future direction of AI in health communication, including and especially by engaging in research and development activities that build evidence of effectiveness and support the development of health literacy skills alongside the expansion of technologies.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 eHealth literacy has most often been used as a term to describe the use of digital media skills in health care, while digital health literacy has been used as a more generic term encompassing the broader skill set required to make optimal use of a range of digital resources. For convenience, this paper uses the more generic term digital health literacy.

2 Definitions developed with assistance from ChatGPT (accessed August 1, 2023) and the Glossary of AI Terms found here: https://www.expert.ai/glossary-of-ai-terms/? (accessed August 1, 2023).

References

  • Australian Government Department of Industry, Science and Resources. (2023). Safe and responsible AI in Australia: A discussion document. Australian Government, Canberra. https://consult.industry.gov.au/supporting-responsible-ai
  • Chaix, B., Guillemassé, A., Nectoux, P., Delamon, G., & Brouard, B. (2020). Vik: A chatbot to support patients with chronic diseases. Health, 12(07), 1–7. doi: 10.4236/health.2020.127058.
  • Chow, J. S. F., Blight, V., Brown, M., Glynn, V., Lane, B., Larkin, A., Marshall, S., Matthews, P., Rowles, M., & Warner, B. (2023). Curious thing, an artificial intelligence (AI)-based conversational agent for COVID-19 patient management. Australian Journal of Primary Health, 29(4), 312–318.
  • Fan, X., Chao, D., Zhang, Z., Wang, D., Li, X., & Tian, F. (2021). Utilization of self-diagnosis health chatbots in real-world settings: Case study. Journal of Medical Internet Research, 23(1), e19928. https://www.jmir.org/2021/1/e19928
  • Görtz, M., Baumgärtner, K., Schmid, T., Muschko, M., Woessner, P., Gerlach, A., Byczkowski, M., Sültmann, H., Duensing, S., & Hohenfellner, M. (2023). An artificial intelligence-based chatbot for prostate cancer education: Design and patient evaluation study. Digital Health, 9, 20552076231173304.
  • Green, N., Rubinelli, S., Scott, D., & Visser, A. (2013). Health communication meets artificial intelligence. Patient Education and Counseling, 92(2), 139–141.
  • Kang, E., Chen, D.-R., & Chen, Y.-Y. (2023). Associations between literacy and attitudes toward artificial intelligence–assisted medical consultations: The mediating role of perceived distrust and efficiency of artificial intelligence. Computers in Human Behavior, 139, 107529.
  • Koh, A., Swanepoel, D. W., Ling, A., Ho, B. L., Tan, S. Y., & Lim, J. (2021). Digital health promotion: Promise and peril. Health Promotion International, 36(Supplement_1), i70–i80.
  • Liu, T., & Xiang, X. (2021). A framework of AI-based approaches to improving ehealth literacy and combating infodemic. Frontiers in Public Health, 9.
  • Muscat, D., Hinton, R., Nutbeam, D., Kenney, E., Kuruvilla, S., & Jakab, Z. (2023). Universal health information is essential for universal health coverage. Family Medicine and Community Health, 11(2), e002090.
  • Norman, C. D., & Skinner, H. A. (2006). eHealth literacy: Essential skills for consumer health in a networked world. Journal of Medical Internet Research, 8(2), e9.
  • Nutbeam, D. (2000). Health literacy as a public health goal: A challenge for contemporary health education and communication strategies into the 21st century. Health Promotion International, 15(3), 259–267.
  • Nutbeam, D., & Muscat, D. (2021). Health promotion glossary 2021. Health Promotion International, 36(6), 1811–1811.
  • Nutbeam, D., & Muscat, D. (2023). Health literacy in a nutshell. Sydney: McGraw-Hill.
  • Palanica, A., Flaschner, P., Thommandram, A., Li, M., & Fossat, Y. (2019). Physicians’ perceptions of chatbots in health care: Cross-sectional web-based survey. Journal of Medical Internet Research, 21(4), e12887.
  • Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare (Basel, Switzerland), 11(6), 887.
  • Swire-Thompson, B., & Lazer, D. (2020). Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health, 41(1), 433–451.
  • Tudor Car, L., Dhinagaran, D. A., Kyaw, B. M., Kowatsch, T., Joty, S., Theng, Y., & Atun, R. (2020). Conversational agents in health care: Scoping review and conceptual analysis. Journal of Medical Internet Research, 22(8), e17158.
  • Wang, A., Qian, Z., Briggs, L., Cole, A. P., Reis, L. O., & Trinh, Q. (2023). The use of chatbots in oncological care: A narrative review. International Journal of General Medicine, 16, 1591–1602.
  • World Health Organisation (WHO). (2016). Shanghai declaration on promoting health in the 2030 agenda for sustainable development. WHO. https://www.who.int/publications/i/item/WHO-NMH-PND-17.5
  • Yang, L. W. Y., Ng, W. Y., Lei, X., Tan, S. C. Y., Wang, Z., Yan, M., Pargi, M. K., Zhang, X., Lim, J. S., Gunasekeran, D. V., Tan, F. C. P., Lee, C. E., Yeo, K. K., Tan, H. K., Ho, H. S. S., Tan, B. W. B., Wong, T. Y., Kwek, K. Y. C., Goh, R. S. M., Liu, Y., & Ting, D. S. W. (2023). Development and testing of a multi-lingual natural language processing-based deep learning system in 10 languages for COVID-19 pandemic crisis: A multi-center study. Frontiers in Public Health, 11, 1063466.