With the emergence of ChatGPT, Bard, and other large language model generative artificial intelligence tools, hereinafter collectively referred to as “AI Tools”, many members of our community are eager to explore their use in the university context. This advisory, jointly produced by the Office of Legal Affairs, University Compliance Services, Information Security Office, and the Business Contracts Office, provides guidance on how to use these AI Tools safely, without putting institutional, personal, or proprietary information at risk. Additional guidance may be forthcoming as circumstances evolve.

Allowable Use:

Prohibited Use:

  • At present, ChatGPT or similar AI Tools must not be used with any personal, confidential, proprietary, or otherwise sensitive information unless a university contract is in place that specifically protects such university data from being used to train models or otherwise isolates university data in a separate instance that is not accessible by parties external to the university. In general, student records subject to FERPA, health information, proprietary information, and any other information classified as Confidential or Controlled university data must not be used with AI Tools.
  • Similarly, ChatGPT or similar AI Tools must not be used to generate output that would be considered non-public. Examples include, but are not limited to, generating proprietary or unpublished research; legal analysis or advice; recruitment, personnel, or disciplinary decision making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading.
  • Please also note that OpenAI, the company that owns ChatGPT, explicitly forbids the use of ChatGPT and its other products for certain categories of activity, including fraud and illegal activities. The full list can be found in its usage policy. AI Tools of any sort may not be used for any activity that would be illegal, fraudulent, or a violation of any state or federal law, or of UT Austin or UT System policies.

Additional Guidance:

For further guidance on the use of ChatGPT or other AI Tools for teaching and learning, please see the resources provided by the Center for Teaching and Learning.

Rationale for the Above Guidance:

  • No UT Agreement, No Privacy and Security Terms: All content entered into, or generated by, ChatGPT is available to ChatGPT, its parent company, OpenAI, and their employees. There is currently no agreement between UT Austin and OpenAI, Microsoft, or other AI Tool providers that would provide the data security and privacy protections required by UT policy with regard to ChatGPT, OpenAI’s, or other AI Tools’ programming interfaces. Consequently, the use of ChatGPT or other AI Tools at this time could expose individual users and UT to the potential loss and/or abuse of sensitive data and information.
    • As of May 2023, the UT Austin Business Contracts Office is working on this issue. We hope to see this addressed in the near future and will update this guidance when additional information is available.
  • Personal Liability: ChatGPT and other AI Tools use click-through agreements. Click-through agreements, including the terms of use for OpenAI, ChatGPT, and other AI Tools, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with the terms and conditions [1].

Guidance on Appropriate Use

For questions regarding the appropriate use of ChatGPT and other AI Tools, please contact the UT Information Security Office at security@utexas.edu.