Large language models (LLMs) such as ChatGPT, Gemini, and Claude are trained on massive datasets using advanced deep learning to interpret and generate human-like responses across a wide range of end-user interactions. They respond to basic natural language queries with relevant answers, reducing the time needed to turn questions into knowledge. Although using LLMs may require learning new techniques to develop prompts that produce a desired response, such tools are changing the administrative dynamics of healthcare.
Prompt engineering remains an integral part of the LLM process; specific, contextual, and naturally framed queries improve the accuracy of responses. LLMs are powerful tools for accepting plain-language queries and returning feedback in near real time, but they still require human oversight to guard against hallucinations, ambiguous prompts, and sensitivity to phrasing.
LLMs have shown practical benefits, with healthcare organizations increasingly recognizing their value in automating repetitive tasks, accelerating documentation workflows, and supporting real-time administrative decision-making. Their role is not to replace healthcare professionals but to support and extend their capabilities.
LLMs' Role in Administrative Processes
A November research article in Advances in Health Information Science and Practice (AHISP) found that many surveyed health information (HI) professionals struggled with artificial intelligence (AI) models and with developing and using AI products, despite frequent experience with everyday AI tools. According to a 2023 AHIMA workforce survey, 66 percent of HI professionals reported ongoing staffing shortages, while 75 percent emphasized the need for workforce upskilling. In this climate, LLMs provide timely relief by reducing manual workloads and empowering professionals to work more effectively. Using LLMs to perform administrative tasks such as summarizing documents and processing information can ultimately improve operational efficiency (Nagarajan et al., 2024).
Early implementations of LLMs in clinical environments show strong potential to alleviate provider burden. For example, ambient scribe technology and GPT-based documentation tools are being used to reduce time spent on manual note entry. According to an article in Healthcare IT Today, “LLMs integrated into ambient scribe tools are streamlining clinical documentation, minimizing physician burnout, and helping refocus time on patient care rather than paperwork.” These efficiencies are not only technical wins but also workforce solutions for ongoing staffing challenges.
Practical Applications for Task Simplification
In addition to enhancing productivity, recent studies have explored how patients perceive AI-generated responses in healthcare settings. A 2024 study published in JAMA Network Open found that many patients preferred AI-generated responses to health questions over those written by physicians—particularly in terms of quality and empathy. However, patient satisfaction declined slightly when patients were informed that the responses had been generated by AI (Zhou et al., 2024). This highlights an important nuance: while AI-generated content may be viewed as competent and compassionate, transparency about its authorship plays a key role in maintaining trust.
To fully realize the value of LLMs in healthcare administration, HI professionals must understand the importance of effective interaction—especially through a process known as prompt engineering, which involves crafting clear, intentional instructions that guide the model toward a desired output. Clearly articulating the task at hand enhances the relevance of the response, and tailoring the prompt's complexity can improve the model's adaptability and reduce ambiguity.
Vague prompts often result in generic or incomplete responses, so an iterative approach—refining, testing, and adjusting prompts—is essential. Ongoing monitoring ensures continued accuracy and maintains the critical human oversight needed for responsible LLM use. Here’s an example of how initial and refined prompts differ:
- Initial Prompt: Draft an email to my team requesting them to share project updates.
- Refined Prompt: Draft a professionally worded email to my team requesting that they share project updates. In their response, ask them to use the following format: project name/code, status update, hours needed listed by month, and status of any risks/issues/action items.
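One practical way to keep refined prompts consistent is to assemble them programmatically from the structured fields they request. The sketch below is illustrative only—the function name and field list are assumptions, not part of any particular tool:

```python
# A minimal sketch of building a refined, structured prompt from a list of
# required response fields. Names here are illustrative, not a standard API.

def build_update_request_prompt(fields):
    """Compose an email-drafting prompt that spells out the exact
    response format the team should follow."""
    format_lines = "\n".join(f"- {name}" for name in fields)
    return (
        "Draft a professionally worded email to my team requesting that "
        "they share project updates. In their response, ask them to use "
        "the following format:\n" + format_lines
    )

prompt = build_update_request_prompt([
    "project name/code",
    "status update",
    "hours needed listed by month",
    "status of any risks/issues/action items",
])
print(prompt)
```

Because the field list lives in one place, the same template can be reused and refined iteratively without retyping the full prompt each time.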
Integration Strategies for LLMs
Administrative Workflow Assistance
LLM use can simplify administrative duties by providing additional context and critical analysis, helping to upskill employees' understanding of such tasks while reducing the time needed to complete them. Use cases presented at the AHIMA AI Summit in June included demonstrations of automated policy drafting, email generation, and performance review creation—all of which require significant time investment. When paired with structured prompts, LLMs were shown to reduce task completion time by over 60 percent in some settings.
Data Analysis
Depending on the content being analyzed, different file types can be ingested into an LLM: images such as JPEGs for image-related queries; documents such as PDF, TXT, or DOCX (Microsoft Word) for text extraction and document or content review; code files such as structured query language (SQL) scripts; and data files such as comma-separated values (CSV) or XLSX (Excel).
Pairing LLMs with data analysis and visualization tools such as Excel, SPSS, and Tableau can be advantageous for quickly compiling charts and pivot tables and conducting business analysis from descriptive statistics. LLMs can help quickly identify trends in data that may otherwise go unnoticed and suggest potential next steps, ensuring strategic alignment from extraction through analysis.
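The same descriptive statistics and pivot tables an analyst would build in Excel can be computed locally before (or instead of) handing a file to an LLM. A minimal sketch, assuming the pandas library and an illustrative column layout that is not from any standard schema:

```python
import pandas as pd

# Illustrative data; the column names are assumptions for this sketch.
df = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb"],
    "department": ["HIM", "Coding", "HIM", "Coding"],
    "claims_processed": [120, 95, 140, 90],
})

# Descriptive statistics comparable to an Excel summary (count, mean, etc.).
summary = df["claims_processed"].describe()

# A pivot table of claims by department and month, like Excel's PivotTable.
pivot = df.pivot_table(index="department", columns="month",
                       values="claims_processed", aggfunc="sum")
print(pivot)
```

Running the numbers locally first also gives a ground truth against which an LLM's summary of the same file can be checked.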
Text Analysis and Categorization
An area where LLM usage can be beneficial is text analysis and categorization. Consider how long it would take an individual to read and categorize over 300 clinical notes. LLMs can classify clinical notes into specific features such as vitals, symptoms, and assessment plan. Excel and CSV files can hold both structured and unstructured data: the file itself is structured, with appropriate headers the LLM can ingest alongside the prompt, while the underlying data (e.g., provider notes) may be unstructured. As with human-conducted analysis, it is imperative to have properly formatted headers with labeled columns to further assist readability.
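To make the structure concrete, the sketch below shows a labeled CSV of provider notes and a simple keyword-based categorizer. The categories and keyword lists are illustrative assumptions only—an LLM would classify far more flexibly—but the example shows the headered format such a file would take:

```python
import csv
import io

# Illustrative categories and keywords; these are assumptions for the sketch,
# not a clinical standard.
CATEGORY_KEYWORDS = {
    "vitals": ["bp", "blood pressure", "pulse", "temp"],
    "symptoms": ["pain", "nausea", "cough", "fatigue"],
    "assessment_plan": ["assessment", "plan", "follow up", "prescribed"],
}

def categorize(note_text):
    """Return the categories whose keywords appear in the note."""
    text = note_text.lower()
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(word in text for word in words)]

# A structured file with labeled headers over unstructured note text.
raw = io.StringIO(
    "note_id,provider_note\n"
    "1,BP 130/85 and pulse 72; patient reports fatigue\n"
    "2,Assessment: stable. Plan: follow up in 2 weeks\n"
)
for row in csv.DictReader(raw):
    print(row["note_id"], categorize(row["provider_note"]))
```

The labeled `note_id` and `provider_note` headers play the same role for an LLM that they do here: they tell the reader which column holds the text to be classified.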
Depending on the LLM being used, different outputs and recommendations may be proffered, which can benefit the end-user. Models may provide direct feedback within the interface, or the LLM may modify the source file directly, allowing the end-user to download the adjusted file with the embedded outputs. A note on protected health information (PHI) and proprietary information: it is vital to follow legal and organizational policies and to remove any identifiable information before using LLMs.
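As a simplistic illustration of stripping obvious identifiers before text leaves the organization, the sketch below redacts a few pattern-matchable fields with regular expressions. This is emphatically not a compliance tool—real de-identification must follow HIPAA requirements and organizational policy, and the patterns here are illustrative assumptions:

```python
import re

# Illustrative patterns only; real de-identification must follow HIPAA and
# organizational policy -- this sketch is not a compliance tool.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text):
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Seen 3/14/2024, MRN: 556677, callback 555-867-5309."))
```

Pattern matching catches only well-formed identifiers; names, addresses, and free-text identifiers require far more robust de-identification review.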
Creating Action Plans
To ensure responsible adoption of LLMs, it is important to build governance frameworks that include:
- Vetting use cases with IT, security, and compliance teams
- Avoiding PHI and proprietary business data exposure
- Establishing training protocols to upskill users in prompt engineering
- Creating standardized workflows where LLMs augment—not replace—existing practices
- Using AHIMA’s AI guide to support health information management (HIM) involvement in AI initiatives
Research continues to show the value of LLMs in clinical and administrative settings. A 2023 study published in JAMA Internal Medicine found that ChatGPT responses were preferred over those written by physicians in 78.6 percent of cases, particularly for their perceived empathy and quality. This finding underscores the growing potential for LLMs to enhance communication and streamline workflows—so long as their use remains grounded in appropriate human oversight.
As with any emerging technology, LLMs should be viewed as a complement—not a replacement—for skilled professionals. For HI experts, these tools offer an opportunity to amplify impact and improve efficiency. However, realizing their full value requires continuous workforce upskilling, thoughtful integration, and a clear understanding of when and how to deploy them.
The implications extend well beyond administrative use, with promising potential for financial savings, clinical support, and operational scalability. Ultimately, automation is assistive—not authoritative. The key to successful implementation lies in strategic application, robust training, and ensuring that “an adult is in the room” to provide the necessary guardrails, governance, and intent behind every interaction.
Alex Gelvezon, DHA, CPHQ, LSSGB, is a health informatics analyst at UCLA Health, and Megan Pruente, MPH, RHIA, FAHIMA, is IT director of person identity management at Banner Health. They gave a presentation on this topic at AHIMA25 in October.