
Physical Therapy - Dallas

Physical therapy research guide for TWU Dallas students, faculty, and staff.

Generative AI Tools for Academia

Generative Artificial Intelligence (AI): Definition

Generative AI refers to a class of artificial intelligence systems designed to create new content—such as text, images, audio, video, data, or code—based on patterns learned from existing data.
Unlike traditional AI models that classify or predict outcomes, generative models produce original outputs that mimic human creativity and language.

Generative AI operates using large language models (LLMs) or multimodal models, which are trained on vast datasets to recognize relationships between words, images, and concepts. These systems can assist students, educators, and researchers by generating drafts, summaries, explanations, or data visualizations—but must always be used critically, with attention to accuracy, bias, transparency, and ethical use.

Example: ChatGPT and other LLMs can draft literature summaries, generate search strategies, or assist with citation formatting—yet they can also produce inaccurate or fabricated content (“AI hallucinations”) if unchecked.

Recommended Generative AI Tools for Academic Use

1. ChatGPT (OpenAI)

  • Description: ChatGPT is a conversational AI developed by OpenAI that generates text, explanations, research ideas, and summaries. It can assist with editing, brainstorming, and writing support.

  • Access: ChatGPT Landing Page (OpenAI)

  • Use: Writing support, idea generation, summaries, search string development, code and data explanations.

  • Best For: Faculty, librarians, and students in writing-intensive or research disciplines.

  • Caution: Always verify accuracy, cite AI assistance transparently, and fact-check citations carefully, as ChatGPT may generate fabricated (“hallucinated”) references.

  • Versions:

    • Free: ChatGPT (GPT-3.5; basic access, limited functionality).

    • Paid: ChatGPT Plus (GPT-4), with enhanced reasoning, image generation (via DALL·E), and file uploads.

  • Academic Use: Excellent for drafting outlines, brainstorming research questions, summarizing articles, or explaining complex concepts.


2. Claude (Anthropic)

  • Description: Claude is an AI chatbot developed by Anthropic, designed for safe, transparent, and ethical interaction. It handles larger text files (e.g., PDFs, research reports) for summarization and synthesis.

  • Access: Claude AI Information & Pricing

    • Free and paid versions are available; the Pro plan supports larger context windows and faster response times.

  • Academic Use: Ideal for summarizing long documents, synthesizing literature, or supporting scoping and systematic reviews.

  • Unique Strength: Uses “Constitutional AI” to promote fairness and ethical reasoning and reduce bias.


3. Copilot (Microsoft)

  • Description: Microsoft Copilot integrates OpenAI’s GPT models into Microsoft 365 (Word, Excel, PowerPoint, Outlook) for real-time writing and data support.

  • Access: Microsoft Copilot Information Page

    • Included in enterprise or education Microsoft 365 subscriptions. Accessible through the Edge browser or Windows 11 sidebar.

  • Academic Use: Enhances writing, summarization, and data visualization across Microsoft apps—ideal for administrative tasks, student support, and faculty material creation.

  • Institutional Note: Recommended for use under TWU’s institutional privacy and security settings.


4. Gemini (Google)

  • Description: Gemini, formerly Google Bard, is Google’s large language model integrated across Google Workspace (Docs, Sheets, Gmail, and Drive).

  • Access: Google Gemini Information Page

    • Free basic access; premium Gemini Advanced (based on Gemini 1.5 Pro) available via Google One AI Premium Plan.

  • Academic Use: Supports writing, summarization, and data organization within Google Workspace.

  • Unique Feature: Direct integration with Google Search and Drive for up-to-date information retrieval.


5. GrammarlyGO / QuillBot AI

  • Description: AI-powered writing enhancement tools for grammar checking, paraphrasing, and tone refinement.


  • Best For: Writing centers, student support, and academic communication improvement.

  • Caution: Encourage ethical use—acknowledge AI assistance when it substantially alters writing or style.

  • Academic Use: Helpful for improving readability, clarity, and consistency in academic writing.


6. Twine / Canva Magic Write / Adobe Firefly

  • Description: Creative and design-oriented AI tools that generate text, images, and layouts from user prompts.


  • Use: Creative and educational design support using text-to-image generation and content visualization.

  • Best For: Educational outreach, student projects, library marketing, and visual communication assignments.

  • Caution: Verify image originality, cite AI-generated visuals appropriately, and ensure accessibility (e.g., add alt text).

How to Apply the TRUST Test

 

T—Transparency

Ask:
  • Does the AI disclose who created it, its purpose, and the source of its funding?
  • Is there documentation (such as a model card or about page) that describes how it works and its limitations?
  • Are you informed when the AI’s responses might be uncertain or incomplete?
Why it matters:
Transparency builds user confidence. Without clear visibility into a system’s design, users cannot meaningfully evaluate its credibility. A trustworthy AI should openly communicate its development origins, methods, and constraints, similar to how academic sources cite their methodologies.

R—Reliability
Ask:
  • Are the AI’s responses consistent when the same question is asked multiple times?
  • Are its outputs replicable and supported by evidence?
  • Does the system handle factual queries without contradictions?
Why it matters:
Reliability measures the AI’s ability to produce stable and accurate results over time.
An unreliable system can lead to misinformation or “hallucinations.”
Users should cross-check outputs against authoritative sources (e.g., PubMed, government data, or peer-reviewed journals) to confirm accuracy.

U—Understanding
Ask:
  • Do you understand how the AI generates responses?
  • Can the AI explain or summarize its reasoning process in plain language?
  • Does the user recognize that AI outputs are probabilistic, not human judgments?
Why it matters:
Understanding bridges the gap between human and machine reasoning.
Users who understand how AI works are less likely to misinterpret its outputs. This promotes AI literacy—knowing what AI can and cannot do—and empowers critical evaluation rather than blind trust.

S—Source Accuracy
Ask:
  • Does the AI cite reputable, verifiable sources for its information?
  • Are the references traceable (e.g., to journals, government data, or recognized organizations)?
  • Can the AI provide links, DOIs, or publication details for fact verification?
Why it matters:
Source accuracy ensures the content is grounded in credible evidence.
AI-generated text can sound authoritative but may lack a factual basis.
Verifiable citations allow the user to confirm whether the output aligns with scholarly or official data.

T—Training Data
Ask:
  • Is it clear what data the AI was trained on (e.g., open web, academic databases, licensed materials)?
  • Does the dataset include diverse perspectives and recent updates?
  • Were ethical and privacy standards upheld during data collection?
Why it matters:
The quality of an AI’s training data determines the integrity, inclusivity, and fairness of its results.
Limited or biased datasets can reinforce misinformation or systemic bias.
Ethical AI systems disclose general details about their data sources and update cycles to ensure relevance and inclusivity.

Why the TRUST Test Works
 
  • Simple & Memorable: Like the CRAAP Test, it’s easy for students and professionals to recall and apply.
  • Bridges Human & Machine Evaluation: Connects traditional information literacy principles with modern AI ethics.
  • Focuses on Accountability: Encourages users to expect explainability, traceability, and ethical transparency from AI systems.
  • Empowers Informed Use: Helps librarians, educators, and researchers guide others in evaluating AI responsibly.

Created by Flora Beth Ligeti, Health Sciences Librarian, Texas Woman’s University (Dallas Campus). Portions of the TRUST Test framework and its explanation were developed with the assistance of ChatGPT (OpenAI, 2025), under the author's direction, to model ethical and transparent AI use in educational content creation. APA Citation: OpenAI. (2025). ChatGPT [Large language model]. https://chat.openai.com

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

"The Trust Test" (SIFT or general evaluation)

"The Trust Test" is a less standardized term that often refers to the SIFT method in online information literacy or a general framework for evaluating trustworthiness. SIFT is a quick, proactive method involving cross-verification across multiple sources

"The Trust Test" (SIFT or general evaluation) 

It describes the goal of evaluating information, sometimes using the SIFT method as a modern approach to assessing online trustworthiness: 

  • Stop: Pause and reflect before engaging with or sharing the information.
  • Investigate the Source: Determine the source's credibility and expertise by seeking information about it from outside sources.
  • Find Better Coverage: Look for other reliable sources to see if they corroborate the information.
  • Trace Claims to the Original Context: Follow claims, quotes, or media back to their source or context.

 

Comparison of Methodologies

Approach
  • CRAAP Test: Checklist of criteria applied to the source itself ("vertical reading").
  • "The Trust Test" (SIFT Method): A sequence of proactive moves involving external investigation and cross-verification ("lateral reading").

Focus
  • CRAAP Test: Assesses intrinsic qualities of the source (date, author's credentials listed on site, etc.).
  • "The Trust Test" (SIFT Method): Determines the reputation and context of the source from outside the source itself.

Vulnerability
  • CRAAP Test: Can be susceptible to sophisticated disinformation campaigns that create professional-looking but biased websites.
  • "The Trust Test" (SIFT Method): More effective at navigating online misinformation by encouraging users to leave the page and verify elsewhere.

Application
  • CRAAP Test: Widely used in academic settings, particularly with traditional, often peer-reviewed, sources like journal articles and books.
  • "The Trust Test" (SIFT Method): Designed explicitly for evaluating complex and often manipulated information found on the internet and social media.

How to Use the Checklist

 

C / T: Currency / Transparency
  CRAAP Test (Information Sources):
  ▢ When was the information published or updated?
  ▢ Is it current enough for your topic or field?
  ▢ Are links, data, and citations up to date?
  TRUST Test (AI Tools & Outputs):
  ▢ Does the AI disclose who created it, its purpose, and its funding?
  ▢ Is there documentation (e.g., a model card) describing how it works and its limitations?
  ▢ Does it indicate uncertainty or confidence levels in outputs?

R / R: Relevance / Reliability
  CRAAP Test (Information Sources):
  ▢ Does the content directly relate to your topic or research question?
  ▢ Is the intended audience appropriate (academic, professional, or general)?
  ▢ Is it too basic or too technical?
  TRUST Test (AI Tools & Outputs):
  ▢ Are the AI’s responses consistent and replicable?
  ▢ Are outputs supported by verifiable or cross-checked evidence?
  ▢ Does the AI produce stable factual results over time?

A / U: Authority / Understanding
  CRAAP Test (Information Sources):
  ▢ Who is the author, publisher, or sponsoring organization?
  ▢ What are their credentials or affiliations?
  ▢ Are they recognized experts in the field?
  TRUST Test (AI Tools & Outputs):
  ▢ Do you understand how the AI generates its results?
  ▢ Can the AI explain its reasoning in plain language?
  ▢ Do you recognize that its responses are probabilistic, not human reasoning?

A / S: Accuracy / Source Accuracy
  CRAAP Test (Information Sources):
  ▢ Is the information supported by evidence, citations, or references?
  ▢ Has it been reviewed or peer-reviewed?
  ▢ Is it free of errors, bias, or misinformation?
  TRUST Test (AI Tools & Outputs):
  ▢ Does the AI provide citations to credible, verifiable sources?
  ▢ Can you trace claims back to authoritative evidence?
  ▢ Are references aligned with academic or professional standards?

P / T: Purpose / Training Data
  CRAAP Test (Information Sources):
  ▢ Why was the information created (to inform, teach, sell, persuade)?
  ▢ Is it objective, or does it show bias or conflict of interest?
  ▢ Is the purpose clearly stated?
  TRUST Test (AI Tools & Outputs):
  ▢ Does the AI disclose what data it was trained on and when it was last updated?
  ▢ Is the dataset diverse, recent, and ethically sourced?
  ▢ Are potential biases acknowledged and mitigated?

 

 

In essence, the CRAAP test is a set of questions to ask about a source. In contrast, the SIFT method (a common interpretation of a "trust test") is a series of investigative actions designed to determine the credibility of a source in a complex information environment. Both methods aim to enhance information literacy and prevent the use of unreliable information in research.

  1. Apply CRAAP when evaluating articles, websites, databases, and other human-created content.
  2. Apply TRUST when evaluating AI systems (like ChatGPT, citation generators, or summarizers) and their outputs.
  3. Combine both when using AI to locate, summarize, or interpret scholarly information—ensuring both the source and the system meet credibility standards.
  4. Rate each criterion on a 1–5 scale (1 = poor, 5 = excellent) or mark ✓/✗ for quick classroom or assignment use.
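
For step 4, the 1–5 rating can even be tallied with a few lines of code. Below is a minimal Python sketch of the idea; the function name and the example scores are hypothetical illustrations, not part of the published framework.

```python
# Minimal sketch: average a 1-5 rating across the five TRUST criteria.
# The criteria come from the TRUST Test above; the scores are made up.

TRUST_CRITERIA = ["Transparency", "Reliability", "Understanding",
                  "Source Accuracy", "Training Data"]

def trust_score(ratings: dict) -> float:
    """Return the mean 1-5 rating; every criterion must be rated."""
    missing = [c for c in TRUST_CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    return sum(ratings[c] for c in TRUST_CRITERIA) / len(TRUST_CRITERIA)

# Example: a quick classroom evaluation of a hypothetical AI chatbot.
ratings = {"Transparency": 4, "Reliability": 3, "Understanding": 4,
           "Source Accuracy": 2, "Training Data": 3}
print(f"TRUST score: {trust_score(ratings):.1f} / 5")  # -> TRUST score: 3.2 / 5
```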

 

 

Key Differences

Nature
  • CRAAP Test: A specific, widely adopted acronym and a structured, checklist-based method.
  • "TRUST Test" (General Concept): A general concept or judgment, not a formalized, universal method.

Focus
  • CRAAP Test: Evaluates a single source based on specific criteria (Currency, Relevance, Authority, Accuracy, Purpose).
  • "TRUST Test" (General Concept): Focuses on the overarching goal of determining a source's reliability.

Methodology
  • CRAAP Test: Involves a step-by-step review of the source's characteristics ("vertical reading").
  • "TRUST Test" (General Concept): Often involves "lateral reading" (leaving a site to see what others say about it) to verify claims and authority, typical of more modern methods like SIFT.

Primary Limitation
  • CRAAP Test: Can be vulnerable to sophisticated disinformation campaigns that create seemingly credible surface features.
  • "TRUST Test" (General Concept): Lacks a structured, consistent framework for application.

 

 

Limitations: 

The CRAAP test, while useful, can be vulnerable to sophisticated disinformation because it primarily relies on examining the source in isolation (vertical reading). Modern evaluation techniques like the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims to original context), which involve "lateral reading" (leaving the site to see what other sources say about it), are often recommended as an enhancement or alternative to the CRAAP test for online sources. 

In summary, the CRAAP test offers a standardized, multi-faceted approach to critically analyze a source's reliability across several criteria, whereas "The Trust Test" generally describes the fundamental goal of this process—determining if information is worthy of one's trust. The CRAAP method provides the structured tools to make that determination more objectively. 

 

Key Improvements of Lateral Reading

Method
  • CRAAP Test (Vertical Reading): Stays within the source to assess credibility indicators like an "About Us" page or a list of citations.
  • Lateral Reading (Fact-Checker Method): Leaves the original site to search the open web for external information about the source and its claims.

Effectiveness
  • CRAAP Test (Vertical Reading): Can be misled by superficial or well-crafted disinformation (e.g., a biased site with a professional appearance and a convincing "About Us" page).
  • Lateral Reading (Fact-Checker Method): More effectively determines a source's true credibility, intent, and biases by cross-referencing with multiple, potentially more reliable, sources (like established news outlets or fact-checking sites).

Scope
  • CRAAP Test (Vertical Reading): Assesses the source in isolation, which may not provide the full context of its biases or reputation.
  • Lateral Reading (Fact-Checker Method): Provides a broader context by synthesizing information from diverse sources, allowing for a more complete picture of the source's trustworthiness.

Focus
  • CRAAP Test (Vertical Reading): Focuses on a rigid set of criteria (Currency, Relevance, Authority, Accuracy, Purpose) that can be gamed by sophisticated propaganda.
  • Lateral Reading (Fact-Checker Method): Focuses on three core questions: Who's behind the information? What's the evidence? What do other sources say? These questions are designed to quickly get to the core of a source's reliability.

While the CRAAP test provides a useful foundational framework, especially in traditional print contexts, lateral reading is a more robust, dynamic strategy for the modern digital landscape. By demanding external verification, lateral reading helps researchers avoid the pitfalls of self-presentation and deliberate misinformation, leading to a more thorough and reliable evaluation of online sources.

 

TRUST in AI (articles)

Chen, F., Zhou, J., Holzinger, A., Fleischmann, K. R., & Stumpf, S. (2023). Artificial intelligence ethics and trust: From principles to practice. IEEE Intelligent Systems, 38(6), 5-8.

Choung, H., David, P., & Ross, A. (2023). Trust and ethics in AI. AI & Society, 38(2), 733-745.

Duenser, A., & Douglas, D. M. (2023). Whom to trust, how and why: Untangling artificial intelligence ethics principles, trustworthiness, and trust. IEEE Intelligent Systems, 38(6), 19-26.

Durán, J. M., & Pozzi, G. (2025). Trust and trustworthiness in AI. Philosophy & Technology, 38(1), 16.

Gillis, R., Laux, J., & Mittelstadt, B. (2024). Trust and trustworthiness in artificial intelligence. In Handbook on public policy and artificial intelligence (pp. 181-193). Edward Elgar Publishing.

Reinhardt, K. (2023). Trust and trustworthiness in AI ethics. AI and Ethics, 3(3), 735-744.

Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749-2767.

Salloum, S. A. (2024). Trustworthiness of the AI. In Artificial intelligence in education: The power and dangers of ChatGPT in the classroom (pp. 643-650). Springer Nature Switzerland.

AI Tools by Academic Application Area

 

AI Tools for Teaching

These AI platforms support educators in developing lesson plans, differentiating materials, designing assessments, and managing classroom activities.

  • Curipod: Create interactive lessons and quizzes from prompts.

  • MagicSchool.ai: Generates customized lesson plans, rubrics, and feedback.

  • LessonPlans.ai: Automates lesson structure and activity suggestions by grade or topic.

  • Teachology.ai: Assists educators with differentiated instruction and learning strategies.


AI Tools for Presentations

These tools automate slide creation, generate visuals, and enhance communication through AI-generated images and video.

  • Beautiful.ai: AI-powered presentation tool that designs visually consistent slides.

  • Gamma.app: Creates polished decks and visual narratives from text prompts.

  • Tome.app: Combines text, video, and imagery for multimedia storytelling.

  • Pika Labs: Generates short videos and animations from text prompts for presentations or educational use.


AI Tools for Accessibility

AI tools that improve inclusion and access through speech, text, and assistive capabilities.

  • Otter.ai: Real-time transcription and captioning for lectures and meetings.

  • Microsoft Immersive Reader: Supports students with reading and comprehension through adaptive text and voice tools.

  • NaturalReader: Converts text to lifelike speech for accessible reading.

  • Speechify: Reads text aloud from documents or web pages, enhancing comprehension.


AI Tools for Learning

AI applications that assist with reading comprehension, vocabulary building, and concept mapping.

  • ExplainPaper: Simplifies and explains complex academic texts.

  • MindMeister: Creates interactive mind maps and concept diagrams.

  • Quizlet AI: Generates adaptive quizzes and flashcards for active learning.

Scientific and Academic Prompting for Health Sciences Students in Clinical Settings

This section is tailored for Physical Therapy (PT), Speech-Language Pathology (SLP), Nursing, and other Health Sciences students learning to use AI tools (like ChatGPT, Claude, or Gemini) for clinical reasoning, documentation, literature synthesis, and patient education—while maintaining ethical and evidence-based standards.


🩺 Purpose of Scientific Prompting in Clinical Contexts

Scientific or academic prompting teaches health sciences students how to communicate with AI tools using structured, evidence-informed language to:

  • Analyze and synthesize research literature.

  • Draft or refine clinical documentation.

  • Explore hypothetical case scenarios safely (without patient data).

  • Improve critical thinking and professional communication.

By designing precise, ethically guided prompts, students learn to bridge classroom knowledge with clinical decision-making while reinforcing the principles of evidence-based practice (EBP).


Core Elements of an Effective Clinical Prompt

  • Clinical Context: Identify the patient scenario or condition (age, diagnosis, setting). Example: “A 72-year-old female with mild Alzheimer’s disease undergoing home-based physical therapy…”

  • Task or Goal: Define the intended output (summary, treatment plan, patient script, literature overview). Example: “Summarize best practices for improving balance and fall prevention.”

  • Evidence Scope: Request evidence or references from peer-reviewed journals, recent guidelines, or meta-analyses. Example: “Summarize recent (2020–2025) peer-reviewed evidence.”

  • Format & Depth: Specify the type of response you want (e.g., paragraph, bullet list, patient-friendly language, or APA format). Example: “Provide in paragraph form with APA 7th edition citations.”

  • Ethical Limitation: Indicate that you are not requesting real patient data. Example: “Use only published research—no personal or confidential data.”
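
The five elements above can be combined mechanically into a single prompt. The sketch below shows one illustrative way to do this in Python; the function name and layout are illustrative assumptions, not a prescribed template.

```python
# Minimal sketch: assemble the five prompt elements from the list above
# into one structured prompt string. Function name and layout are
# illustrative, not a required template.

def build_clinical_prompt(context, task, evidence_scope,
                          format_depth, ethical_limit):
    return "\n".join([
        f"Clinical context: {context}",
        f"Task or goal: {task}",
        f"Evidence scope: {evidence_scope}",
        f"Format and depth: {format_depth}",
        f"Ethical limitation: {ethical_limit}",
    ])

prompt = build_clinical_prompt(
    context="A 72-year-old female with mild Alzheimer's disease "
            "undergoing home-based physical therapy.",
    task="Summarize best practices for improving balance and fall prevention.",
    evidence_scope="Recent (2020-2025) peer-reviewed evidence.",
    format_depth="Paragraph form with APA 7th edition citations.",
    ethical_limit="Use only published research; no personal or confidential data.",
)
print(prompt)
```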

 

The CLINICAL Model: A Prompt Framework for Health Sciences Education

CLINICAL Framework Breakdown

 

  • C (Contextualize): Clearly describe the clinical or academic scenario to anchor the AI’s response. Include key details such as the population, setting, and condition. Example: “A 65-year-old woman with post-stroke hemiparesis in outpatient rehabilitation.”

  • L (Link): Connect your prompt to a specific learning outcome or clinical objective. This ensures relevance and focus. Example: “Explain how proprioceptive training improves balance control in older adults.”

  • I (Include): Specify what kind of evidence or source type you want included in the response—e.g., systematic reviews, guidelines, or randomized controlled trials (RCTs). Example: “Summarize findings from systematic reviews published between 2020 and 2025.”

  • N (Narrow): Define the scope or limits of your question to make the AI’s answer precise and manageable. You can narrow your search by population, timeframe, intervention, or outcome. Example: “Limit to community-dwelling adults aged 60 and above.”

  • I (Integrate): Request a synthesis of evidence or a comparison across studies. Ask the AI to show how findings relate or contrast. Example: “Compare exercise-based and pharmacologic interventions for chronic low back pain.”

  • C (Clarify): State the desired output format (e.g., summary paragraph, bullet list, APA citation, or plain language). Example: “Provide a 150-word academic summary in APA 7th edition style.”

  • A (Acknowledge): Encourage ethical reflection by reminding users (and the AI) to note limitations, uncertainty, and bias. Example: “Identify any gaps in the research or limitations of current evidence.”

  • L (Leverage): Use the AI’s strengths for brainstorming, drafting, or refining—then build upon it with human judgment. Example: “Generate discussion questions for a physical therapy ethics seminar based on this topic.”
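
To show how a fully specified CLINICAL prompt might be sent to a chat model, here is a minimal sketch assuming the openai Python package (v1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name is only an example, and the prompt text reuses the examples from the breakdown above.

```python
# Minimal sketch: send a CLINICAL-structured prompt to a chat model.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY are configured;
# "gpt-4o" is an example model name.

from openai import OpenAI

clinical_prompt = """\
Contextualize: A 65-year-old woman with post-stroke hemiparesis in outpatient rehabilitation.
Link: Explain how proprioceptive training improves balance control in older adults.
Include: Summarize findings from systematic reviews published between 2020 and 2025.
Narrow: Limit to community-dwelling adults aged 60 and above.
Integrate: Compare findings across studies and note agreements or contrasts.
Clarify: Provide a 150-word academic summary in APA 7th edition style.
Acknowledge: Identify any gaps in the research or limitations of current evidence.
Leverage: End with two discussion questions for a physical therapy seminar.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": clinical_prompt}],
)
print(response.choices[0].message.content)
```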

 

 

Purpose and Application

The CLINICAL Model empowers students and educators to:

  • Structure prompts for clarity, depth, and precision.

  • Promote evidence-based practice and avoid superficial AI outputs.

  • Develop ethical awareness when engaging with generative AI in healthcare contexts.

  • Align AI use with professional and academic standards in research, teaching, and clinical education.

Educational Integration

  • Library Instruction: Use CLINICAL to teach how to ask purposeful, research-based questions.

  • Health Sciences Courses: Integrate into evidence-based practice modules or clinical reasoning labs.

  • Faculty Development: Support educators in designing AI-integrated learning activities and assessments.

Ethical Reminder

AI tools are valuable companions in learning and scholarship, but not replacements for professional expertise or human reasoning.
Always:

  • Verify generated information against peer-reviewed sources (PubMed, CINAHL, Scopus).

  • Disclose AI use in academic submissions.

  • Avoid using identifiable patient data or confidential records.

 

 

Ligeti, F. B. M. (2025). The CLINICAL Model: A Prompt Framework for Health Sciences Education. Texas Woman’s University Libraries, Dallas Campus, with conceptual collaboration by ChatGPT (OpenAI, 2025).

APA Citation: OpenAI. (2025). ChatGPT [Large language model]. https://chat.openai.com


🎨 AI Tools for Audio, Image, and Video Generation

Each tool below includes an academic description, access guidance, educational applications, and ethical considerations.

Generative AI tools for multimedia creation enable users to produce original images, sound, and video from text prompts.
These tools are valuable for educational presentations, instructional design, library outreach, and creative student projects.
Faculty and students are encouraged to use them ethically, ensuring copyright compliance and clear attribution for AI-generated content.


DALL·E (OpenAI)

  • Description: DALL·E is a generative image model developed by OpenAI that transforms written prompts into original images. It is now integrated directly into ChatGPT (Plus and Enterprise versions), allowing users to create and edit images using natural language instructions.

  • Access: Through ChatGPT (OpenAI) — included in GPT-4 (Plus and Enterprise).

  • Features:

    • Text-to-image creation and image editing (“inpainting”).

    • Style customization (photorealistic, illustration, or conceptual art).

    • Integration with ChatGPT for prompt-based visual storytelling.

  • Academic Use:

    • Create custom visuals for instructional materials, slides, posters, or research graphics.

    • Support digital literacy and creativity workshops in education, communication, or art.

  • Caution:

    • Verify originality; avoid generating identifiable likenesses or copyrighted content.

    • Attribute AI-generated imagery clearly (e.g., “Image generated using DALL·E, OpenAI”).
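
For users who prefer scripted access over the ChatGPT interface, the OpenAI Images API exposes DALL·E programmatically. A minimal sketch follows, assuming the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the prompt and model name are examples.

```python
# Minimal sketch: generate an image with DALL-E via the OpenAI Images API.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY are configured.

from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",                      # example model name
    prompt="A simple labeled illustration of the human knee joint "
           "for a physical therapy slide deck",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # temporary URL of the generated image
```

As noted in the caution above, attribute the result clearly (e.g., “Image generated using DALL·E, OpenAI”).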


Midjourney

  • Description: Midjourney is an advanced AI art generator that produces high-quality, stylized images from text prompts. It operates via the Discord platform and specializes in artistic, imaginative, or design-driven visuals.

  • Access: Midjourney Documentation — includes a Quick Start guide, sample prompts, and subscription options.

    • Requires a paid subscription (monthly plans vary by usage level).

  • Features:

    • Prompt-based image generation emphasizing composition, lighting, and texture.

    • Style blending, upscaling, and iterative refinement tools.

    • Community-driven gallery for collaborative learning.

  • Academic Use:

    • Excellent for digital art, design, or media studies coursework.

    • Supports library and classroom outreach visuals, concept visualization, and poster design.

  • Caution:

    • Midjourney images are hosted publicly on Discord; maintain FERPA and privacy compliance.

    • Ensure accessibility (add alt text) and proper credit (e.g., “Image generated using Midjourney AI”).


Stability AI

  • Description: Stability AI develops open-source generative models for image, video, and audio creation, including Stable Diffusion, Stable Video, and Stable Audio.

  • Access: Stability AI Models Page

    • Offers both free and paid versions.

    • Models can be run locally for privacy or accessed via Stable Assistant, a web-based interface.

  • Features:

    • Stable Diffusion: Generates images from text prompts.

    • Stable Video: Converts still images or prompts into motion clips.

    • Stable Audio: Composes royalty-free background music or soundscapes.

    • Open-Source Framework: Transparency in data and code promotes reproducibility and academic study.

  • Academic Use:

    • Excellent for media studies, instructional design, or communication technology courses.

    • Enables faculty and students to explore visual storytelling, audio-visual media creation, and data ethics.

  • Caution:

    • Generated outputs must adhere to ethical media use and copyright guidelines.

    • Disclose AI-generated media in educational or publication contexts.
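
Because Stable Diffusion is open source, it can be run locally, which supports the privacy option noted above. The sketch below assumes Hugging Face’s diffusers library and PyTorch are installed; the checkpoint ID is one public example, and a GPU is strongly recommended.

```python
# Minimal sketch: run Stable Diffusion locally with Hugging Face `diffusers`.
# Assumes diffusers and torch are installed; the checkpoint is one public
# example checkpoint.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # example public checkpoint
    torch_dtype=torch.float16,
).to("cuda")                               # use "cpu" if no GPU (very slow)

image = pipe("A watercolor diagram of the phases of human gait "
             "for a physical therapy lecture").images[0]
image.save("gait_phases.png")
```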


Best Practices for AI Multimedia Creation

When using multimedia AI tools such as DALL·E, Midjourney, or Stability AI:

  • Credit AI-generated content clearly (include the tool name and date).

  • Avoid using copyrighted or personally identifiable material in prompts.

  • Ensure accessibility by providing alt text and transcripts for visuals and audio.

  • Integrate responsibly into coursework — AI tools should enhance, not replace, creativity and original thought.

AI Coding Tools

AI-assisted coding tools can help students, educators, and researchers generate, explain, and optimize code across multiple programming languages.
These tools are designed to enhance efficiency, learning, and innovation in computer science, data analytics, digital humanities, and educational technology.
They can also be used by librarians supporting digital scholarship, data literacy, or open science projects.

Always review and test AI-generated code carefully to ensure it is accurate, ethical, and secure.


GitHub Copilot

  • Description: GitHub Copilot, developed by GitHub and OpenAI, is an AI-powered coding assistant that helps generate, complete, and refactor code in real time.

  • Access: GitHub Copilot Overview

    • Available as an extension in Visual Studio Code, JetBrains IDEs, and GitHub Codespaces.

    • Free for verified students, teachers, and open-source developers.

    • Paid plans for professional and enterprise use.

  • Features:

    • Suggests code completions and solutions as you type.

    • Provides inline documentation and explanations of programming logic.

    • Supports multiple programming languages (Python, R, JavaScript, C++, HTML/CSS, etc.).

    • Integrates directly with GitHub repositories for version control.

  • Academic Use:

    • Assists computer science and data science students in learning programming concepts.

    • Helps faculty and researchers automate repetitive coding tasks or data processing.

    • Supports digital scholarship projects requiring code-based workflows (data cleaning, text mining, visualization).

  • Caution:

    • Generated code may include snippets based on open-source data—verify licenses before publication.

    • Avoid using Copilot to process confidential or proprietary datasets.
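
Copilot works inside the editor rather than through a script: you type a comment or function signature, and it proposes a completion inline. The Python snippet below illustrates that workflow; the suggested body is typical of what Copilot might propose, but actual suggestions vary and should always be reviewed.

```python
# Illustration of comment-driven Copilot use. A student types the comment
# and the signature; Copilot proposes a body like the one below.
# Suggestions vary and must be reviewed before accepting.

# Convert a list of patient heights from inches to centimeters.
def inches_to_cm(heights_in):
    return [round(h * 2.54, 1) for h in heights_in]

print(inches_to_cm([65.0, 70.5]))  # [165.1, 179.1]
```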


Code Llama (Meta)

  • Description: Code Llama is a free, open-source large language model developed by Meta (Facebook AI Research), designed to generate and explain code in many programming languages.

  • Access: Code Llama Information Page

    • Available for free download and local use or through AI hosting platforms like Hugging Face.

  • Features:

    • Supports multiple coding languages, including Python, Java, C++, and SQL.

    • Capable of code completion, debugging, and code explanation.

    • Can be run locally for enhanced privacy—no data sharing required.

    • Built upon Llama 2, Meta’s advanced open-source language model.

  • Academic Use:

    • Excellent for programming education, research reproducibility, and data science instruction.

    • Can serve as a teaching tool for code comprehension or as a support resource for open-source digital projects.

    • Encourages ethical AI use through open access and transparency.

  • Caution:

    • Requires moderate technical setup knowledge to deploy locally.

    • AI-generated code should be tested, reviewed, and documented before academic submission or publication.
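
As one example of local use, Code Llama checkpoints published on Hugging Face can be loaded with the transformers library. The sketch below assumes transformers and PyTorch are installed and that your machine can hold a 7B-parameter model; the checkpoint ID is one of Meta’s public releases.

```python
# Minimal sketch: run a Code Llama checkpoint locally via Hugging Face
# `transformers`. Assumes transformers + torch are installed and the
# (multi-gigabyte) model download fits in memory.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-7b-Instruct-hf",  # example public checkpoint
)

prompt = "Write a Python function that converts gait speed from m/s to mph."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])

# Test and document generated code before submission or publication.
```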


Best Practices for AI Coding Tools

When integrating AI-assisted programming tools into academic or research workflows:

  • Review and verify code for accuracy, efficiency, and security vulnerabilities.

  • Acknowledge AI contributions in research documentation or publication methods sections.

  • Respect licensing and intellectual property—do not use generated code without checking open-source permissions.

  • Use locally hosted or institution-approved tools for projects involving sensitive or restricted data.

Artificial Intelligence in Physical Therapy Articles

Mehrotra, S., Degachi, C., Vereschak, O., Jonker, C. M., & Tielman, M. L. (2024). A systematic review on fostering appropriate trust in human-AI interaction: Trends, opportunities and challenges. ACM Journal on Responsible Computing, 1(4), 1-45.

Fehr, J., Jaramillo-Gutierrez, G., Oala, L., Gröschel, M. I., Bierwirth, M., Balachandran, P., ... & Lippert, C. (2022). Piloting a survey-based assessment of transparency and trustworthiness with three medical AI tools. Healthcare, 10(10), 1923.

Tsoi, A. H., Gartner, G., Cotten, S. W., Kim, J., Nazarian, J., Thomas, J., ... & Rimal, R. (2025). Establishing and implementing a responsible artificial intelligence framework: A 1-year review. Journal of the American Medical Informatics Association, 32(11), 1778-1784.

Kim, M., Sohn, H., Choi, S., & Kim, S. (2023). Requirements for trustworthy artificial intelligence and its application in healthcare. Healthcare Informatics Research, 29(4), 315-322.

Kim, C., Gadgil, S. U., & Lee, S. I. (2025). Transparency of medical artificial intelligence systems. Nature Reviews Bioengineering, 1-19.

Ahadian, P., Xu, W., Liu, D., & Guan, Q. (2025). Ethics of trustworthy AI in healthcare: Challenges, principles, and practical pathways. Neurocomputing, 131942.

de-Manuel-Vicente, C., Fernández-Narro, D., Blanes-Selva, V., García-Gómez, J. M., & Sáez, C. (2024). A development framework for trustworthy artificial intelligence in health with example code pipelines. medRxiv, 2024-07.

Weiner, E. B., Dankwa-Mullan, I., Nelson, W. A., & Hassanpour, S. (2025). Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice. PLOS Digital Health, 4(4), e0000810.

Markus, A. F., Kors, J. A., & Rijnbeek, P. R. (2021). The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. Journal of Biomedical Informatics, 113, 103655.

Afroogh, S., Akbari, A., Malone, E., Kargar, M., & Alambeigi, H. (2024). Trust in AI: Progress, challenges, and future directions. Humanities and Social Sciences Communications, 11(1), 1-30.

Aldhahi, M. I., Alorainy, A. I., Abuzaid, M. M., Gareeballah, A., Alsubaie, N. F., Alshamary, A. S., & Hamd, Z. Y. (2025). Adoption of artificial intelligence in rehabilitation: Perceptions, knowledge, and challenges among healthcare providers. Healthcare, 13(4), 350.

Alsobhi, M., Sachdev, H. S., Chevidikunnan, M. F., Basuodan, R., KU, D. K., & Khan, F. (2022). Facilitators and barriers of artificial intelligence applications in rehabilitation: A mixed-method approach. International Journal of Environmental Research and Public Health, 19(23), 15919.

Hao, J., Yao, Z., & Siu, K. C. (2025). Artificial intelligence in physical therapy education: Evaluating clinical reasoning performance in musculoskeletal care using ChatGPT. Musculoskeletal Care, 23(3), e70177.

Lindbäck, Y., Schröder, K., Engström, T., Valeskog, K., & Sonesson, S. (2025). Generative artificial intelligence in physiotherapy education: Great potential amidst challenges - a qualitative interview study. BMC Medical Education, 25(1), 603.

Lowe, S. W. (2024). The role of artificial intelligence in physical therapy education. Bulletin of Faculty of Physical Therapy, 29(1), 13.

Naqvi, W. M., Shaikh, S. Z., & Mishra, G. V. (2024). Large language models in physical therapy: Time to adapt and adept. Frontiers in Public Health, 12, 1364660.

Rasa, A. R. (2024). Artificial intelligence and its revolutionary role in physical and mental rehabilitation: A review of recent advancements. BioMed Research International, 2024(1), 9554590.

Reoli, R., Marchese, V., Duggal, A., & Kaplan, K. (2025). Student perceptions of artificial intelligence in Doctor of Physical Therapy education. Physiotherapy Theory and Practice, 1-6.

Severin, R., & Gagnon, K. (2025). An early snapshot of attitudes toward generative artificial intelligence in physical therapy education. Journal of Physical Therapy Education, 39(3), 214-220.

Shawli, L., Alsobhi, M., Chevidikunnan, M. F., Rosewilliam, S., Basuodan, R., & Khan, F. (2024). Physical therapists’ perceptions and attitudes towards artificial intelligence in healthcare and rehabilitation: A qualitative study. Musculoskeletal Science and Practice, 73, 103152.

Sumner, J., Lim, H. W., Chong, L. S., Bundele, A., Mukhopadhyay, A., & Kayambu, G. (2023). Artificial intelligence in physical rehabilitation: A systematic review. Artificial Intelligence in Medicine, 146, 102693.

Zhang, Q., & Rapport, M. J. (2025). Purposeful integration of artificial intelligence in evidence-based practice course for Doctor of Physical Therapy students. Internet Journal of Allied Health Sciences and Practice, 23(2), 13.