With the increasing integration of artificial intelligence (AI) tools into university activities, including ChatGPT, Copilot, Bard, and other large language model (LLM) or generative AI applications, it's essential to use these technologies responsibly. This guidance, developed collaboratively by the Office of Legal Affairs, University Compliance Services, the Information Security Office, and the Business Contracts Office, outlines acceptable practices for using AI tools while safeguarding institutional, personal, and proprietary information. Additional guidance may be forthcoming as circumstances evolve.
Allowable Use:
- Public or Published Data: Data that is publicly available or classified as Published university information, as defined by the UT Data Classification Standard, may be used freely with AI tools.
- Controlled or Confidential Data: Data classified as Controlled or Confidential university information under the UT Data Classification Standard may be used only with AI tools that are managed by the university and covered by contracts that explicitly protect university data. These contracts should ensure that the data is not used to train models, or that it is isolated in a separate instance inaccessible to external parties.
- Authorized AI Tools: The Local and Cloud Services Decision Matrix provides information on AI tools and the types of university data authorized for use with each tool.
- Acceptable Use: In all cases, use should be consistent with the Acceptable Use Policy.
Prohibited Use:
- Unauthorized AI Tools: AI tools that lack a university contract and appropriate data-sharing controls are not approved for use with Controlled or Confidential university information. This includes free or non-UT-managed versions of AI tools like ChatGPT and Copilot.
- Sensitive Information: Student records subject to FERPA, health information, proprietary information, and any other data classified as Confidential or Controlled must not be used with unauthorized AI tools.
- Non-Public Output: AI tools should not be used to generate non-public outputs, such as proprietary or unpublished research, legal analysis or advice, recruitment or personnel decisions, academic work not permitted by instructors, non-public instructional materials, or grading.
- Fraudulent or Illegal Activities: AI tools must not be used for activities that are illegal or fraudulent, or that violate state or federal law or UT Austin or UT System policies.
Additional Guidance:
- Personal Liability: Be aware that accepting click-through agreements without delegated signature authority may result in personal responsibility for compliance with the terms and conditions of the AI tool [1].
- Vendor and Third-Party Compliance: When engaging with AI tools provided by external vendors, ensure compliance with IRUSP Standard 22, which outlines requirements for vendor and third-party controls and compliance.
AI Efforts on Campus:
For more information on how UT is using AI tools to explore new ideas and solve problems in novel ways, please see UT’s Year of AI site or learn more about how UT is embracing innovation.
For further guidance on the use of ChatGPT or other AI Tools for teaching and learning, please see the following guidance from the Center for Teaching and Learning.
Guidance on Appropriate Use
For questions regarding the appropriate use of ChatGPT and other AI Tools, please contact the UT Information Security Office at security@utexas.edu.
References
- Educator considerations for ChatGPT
- OpenAI sharing & publication policy
- OpenAI usage policies
- OpenAI privacy policy
- OpenAI terms & policies
- [1] Delegation of Authority: To find out who has signature authority at UT Austin to “sign” a click-through agreement, please see the following page.
Revision History
Date | New | Original |
---|---|---|
11/21/24 | | |
09/04/2024 | | |
07/20/2023 | First published | |