As artificial intelligence (AI) tools such as ChatGPT, Copilot, Gemini (formerly Bard), and Claude, along with other generative AI applications including large language models (LLMs) and diffusion models, become increasingly integrated into university activities, it is essential to use these technologies responsibly. This guidance, developed collaboratively by the Office of Legal Affairs, University Compliance Services, the Information Security Office, and the Business Contracts Office, outlines acceptable practices for using generative AI tools while safeguarding institutional, personal, and proprietary information. Additional guidance may be forthcoming as circumstances evolve.
Allowable Use:
- Public or Published Data: Data that is publicly available or classified as Published university information, as defined by the UT Data Classification Standard, may be used freely with AI tools.
- Controlled or Confidential Data: Data classified as Controlled or Confidential university information under the UT Data Classification Standard may be used only with AI tools that are managed by the university and covered by contracts that explicitly protect university data. These contracts should ensure that the data is not used for model training, or that it is isolated in a separate instance inaccessible to external parties; web search functionality must also be disabled.
- Authorized AI Tools: The Local and Cloud Services Decision Matrix provides information on AI tools and the types of university data authorized for use with each tool.
- Acceptable Use: In all cases, use should be consistent with the Acceptable Use Policy.
Prohibited Use:
- Unauthorized AI Tools: AI tools that lack a university contract and appropriate data-sharing controls are not approved for use with Controlled or Confidential university information. This includes free or non-UT-managed versions of AI tools such as ChatGPT and Copilot, as well as AI detection software.
- Sensitive Information: Student records subject to FERPA, health information, proprietary information, and any other data classified as Confidential or Controlled must not be used with unauthorized AI tools.
- Heightened Scrutiny: Texas state law prohibits state agencies, including UT Austin, from deploying AI intended to autonomously make, or be a controlling factor in making, consequential decisions without meaningful human review. Consequential decisions include, but are not limited to: benefits eligibility; financial aid; admissions and dismissal; grading; licensing; recruitment, hiring, termination, discipline, or other personnel decisions; student conduct and academic integrity; child welfare; law enforcement; legal analysis or advice; production or evaluation of academic work product, curriculum design, instructional and testing materials, or academic or scientific research; accreditation; export control; and any other decision that has a significant effect on the provision, denial, or conditions of a person’s access to a government service. UT Austin’s CISO must review AI tools to be used for any of these purposes prior to their procurement, development, deployment, or use. Contact the Information Security Office at security@utexas.edu for this review.
- Fraudulent or Illegal Activities: AI tools must not be used for activities that are illegal, fraudulent, or in violation of state or federal law or of UT Austin or UT System policies.
Additional Guidance:
- Personal Liability: Be aware that accepting click-through agreements without delegated signature authority may result in personal responsibility for compliance with the AI tool's terms and conditions [1].
- Vendor and Third-Party Compliance: When engaging with AI tools provided by external vendors, ensure compliance with IRUSP Standard 22, which outlines requirements for vendor and third-party controls and compliance.
AI Efforts on Campus:
- For more information on how to use AI responsibly in teaching and learning, please reference the Office of Academic Technology's guidance, Responsible Adoption of AI Tools for Teaching and Learning, and the Big 6 Limitations of Using AI for Learning.
- Explore the UT.AI Hub for solutions, training, and community conversations that power your AI journey.
- For further guidance on the use of AI Tools for teaching and learning, please see the following guidance from the Center for Teaching and Learning.
Guidance on Appropriate Use
For questions regarding the appropriate use of ChatGPT and other AI Tools, please contact the UT Information Security Office at security@utexas.edu.
References
- Educator considerations for ChatGPT
- OpenAI sharing & publication policy
- OpenAI usage policies
- OpenAI privacy policy
- OpenAI terms & policies
- [1] Delegation of Authority: To find out who has signature authority at UT Austin to “sign” a click-through agreement, please see the following page.
Revision History
| Date | New | Original |
|---|---|---|
| 04/23/26 | Added Heightened Scrutiny AI to the prohibited use list to align with Texas state law. | |
| 10/29/25 | | |
| 10/22/25 | | |
| 4/7/25 | Update title to "Acceptable Use of AI Tools" and updated links to UT.AI + CTL | |
| 11/21/24 | | |
| 09/04/2024 | | |
| 07/20/2023 | First published | |