Management Topic: Artificial Intelligence FAQs
Dear Colleagues,
This month’s Management Tips article, Artificial Intelligence FAQs, is the final installment in our three-part technology series. This guide answers some of the most common questions about AI, a rapidly evolving field that is transforming the way we do business here at UCLA and in our personal lives.
1. What is an AI prompt and how do you start one?
An Artificial Intelligence (AI) prompt is a conversational way of interacting with a modern generative AI model. You type as you would speak, in natural language, providing context, questions, facts, and other information to the model; in return, you receive a natural-language response. Examples vary by area, but generalized tools such as ChatGPT, Gemini, and Copilot can answer questions or perform tasks across a wide variety of subjects based on the prompt and its background information.
As an example, if you were a digital artist, you could type something such as:
“Generate me an image of a cat in the style of Vincent van Gogh with at least three chairs in the background. The cat should be jumping onto one of the chairs, knocking the other two down.”
Prompts can include information about visual style, imagery, detail, etc.
A prompt for someone researching a term paper in world history might read:
“Generate me an outline for a term paper on civil rights in the 1960s in the United States. The outline should include background on the state of politics, society and culture during those times. I need to make three coherent supporting arguments that deal with the role of law enforcement and civil rights, along with the federal government’s role leading to the Civil Rights Act of 1964, and finally how that time period is influencing modern times (post pandemic 2023 and beyond).”
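For readers who want to go one step further, the same kind of prompt can also be sent to a model programmatically. Below is a minimal sketch, assuming the official OpenAI Python library and an API key set in the OPENAI_API_KEY environment variable; the model name is a placeholder, and your department’s approved tool may differ.

    # A minimal sketch of sending a prompt to a generative AI model.
    # Assumes the OpenAI Python library (pip install openai) and an API key
    # in the OPENAI_API_KEY environment variable; the model name below is
    # illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[{
            "role": "user",
            "content": (
                "Generate me an outline for a term paper on civil rights "
                "in the 1960s in the United States."
            ),
        }],
    )

    # The reply comes back as natural language, just like the chat interface.
    print(response.choices[0].message.content)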
2. What ethical considerations should I be aware of before using AI?
First, understand that the prompts you type into consumer AI models such as ChatGPT, Gemini, and Copilot help train the model and can be used for marketing purposes. Sensitive information, such as proprietary, confidential business, or employee-sensitive information, should not be provided. See ITS’s and the University Chief Information Security Officer’s (CISO) recommendations and guiding principles for responsible use of AI. Currently, UCLA employees have access to Copilot using their UCLA Logon credentials, which provides commercial data protections for UCLA’s information (unlike the consumer versions, your prompts will not be used to train models or for marketing purposes).
The University of California Office of the President’s (UCOP) AI Working Group has developed eight areas of responsible AI. These include transparency about what data was used to train the model; fairness and accuracy in the model’s predictions (and confidence in those predictions); and attention to human values, among other areas. Read more about these principles here.
Finally, be cautious about where and how you integrate AI into tasks and business processes. You shouldn’t use AI to complete your annual training as a faculty member, administrative employee, or student, nor should you use AI to evaluate your peers, friends, or colleagues based on private information. When in doubt, if you would be concerned about a human doing it, you should also be concerned about AI doing it!
Further reading:
- UCLA GenAI Guidance Site
- UC Responsible AI Use Recommendation Article
- Responsible AI Final Report (UC)
3. Is the use of AI considered plagiarism?
This is a complex topic, involving an understanding of the source data as well as the intellectual property rights and licenses surrounding the training of an AI model. In addition, a U.S. federal district court ruled in 2023 that AI-generated output does not constitute copyrightable work. In a university setting, faculty and course instructors typically own that judgment call, and departments and individual instructors may define or tailor additional considerations on top of it. At UCLA, the Academic Senate provides GenAI guidance and resources for Teaching and Learning to instructors. Additionally, students should remember that the UCLA Student Conduct Code applies to GenAI and states that “Unless otherwise specified by the faculty member, all submissions…must either be the Student’s own work, or must clearly acknowledge the source.”
4. Will the AI output be university-specific or generic to the public? Who feeds the data?
Generally available tools such as ChatGPT, Gemini, and Copilot are trained on publicly available data and do not provide university-specific responses. Although generic, the models trained on this data are extremely powerful and usable in a university setting; however, they lack the specific internal context and data necessary to fully realize their potential. Some of the tools are available in enterprise, internal form, such as Copilot, Gemini, and (soon) ChatGPT Enterprise. Unlike the publicly available tools, these internal tools can be used to summarize and analyze UCLA documents or information, and the model will NOT be trained on it. In the coming months, ITS, alongside campus stakeholders, will partner on pilots for university-specific content.
5. When the university introduces AI in stages, which group will be the pilot? Will it be students, faculty, or staff?
Right now, there are two pilots, for Google Gemini and Microsoft Copilot, involving administrative operations as well as selected faculty and staff. Additional offerings will be released to students in the Fall. OpenAI and its ChatGPT product will be available soon. Information from these pilots is being used to tailor campus-wide rollouts. Available GenAI tools can be found on UCLA’s Generative AI website.
6. How can we leverage AI to support the analysis of feedback and improve our services?
AI can be used in a number of ways to analyze guest feedback. First, because the feedback potentially contains business-sensitive information, we recommend using the internal versions of tools such as Gemini and Copilot rather than the publicly available versions. Soon, ChatGPT Enterprise will also be available for this context.
Once the guest feedback is provided as prompts and prompt context, it can be used to ideate and tailor improvements to existing services, and potentially new services, drawing on the feedback itself, knowledge of the university, and other pertinent information supplied in the prompt to guide the AI in its assistance.
7. How can we elevate the visual experiences offered to our potential guests using AI-curated media? (ex: room tours, food photography, etc.)
AI models such as DALL-E 3 (available through ChatGPT Enterprise) and Google’s Gemini can generate imagery as output. Prompts can be tailored to create AI-assisted room tours, food photography, and other media assets that improve visual experiences for potential guests. Follow the same prompt guidance provided earlier.
8. What opportunities for language inclusion does AI offer?
AI models are trained on a multitude of languages. Google’s Gemini, Microsoft’s Copilot, and ChatGPT Enterprise from OpenAI are all multilingual. Many language models can convert from English to hundreds of languages and vice versa. This is based on neural machine translation work pioneered by researchers including our own Chief Data and Artificial Intelligence Officer (CDAIO), who supervised the construction of a 600:1 (many-to-one English) machine translation model.
Use cases include translating text in an image from another language into your own, or taking existing food service, guest, and location/access descriptions in English and translating them into a user’s language of choice with high fidelity and accuracy.
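As a rough illustration of the second use case, the sketch below asks a model to translate an English access description into another language. This is a hypothetical example, again assuming the OpenAI Python library; the model name, description text, and target language are placeholders, not a UCLA-endorsed workflow.

    # A minimal sketch of LLM-based translation; all values are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    description = "The dining hall entrance is wheelchair accessible via the north ramp."
    target_language = "Spanish"  # any of the hundreds of supported languages

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Translate the following into {target_language}, "
                       f"preserving meaning and tone:\n\n{description}",
        }],
    )

    print(response.choices[0].message.content)  # the translated description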
9. How are these opportunities made accessible to communities with low technology literacy?
The common denominator in the modern AI revolution (post ChatGPT, November 2022) is ease of use. You type to these AI models and tools in natural language and iteratively hold a conversation with them, with replies coming back in natural language. You can ask the model to soften its tone, to be inclusive in its language, to consider ethical approaches in its responses, and so on, just as you would a human.
10. How could we use AI to track trends in work-related injuries, monitor high-risk behaviors, or reward safe behaviors? Will the AI include input from the private sector to provide benchmarks?
AI models, including modern large language models (LLMs) like ChatGPT, Gemini, and Copilot, can be adapted using new data. This can be done at the prompt engineering level or by creating a new foundation model trained offline and statically. If we are considering including work-related injuries or high-risk behaviors, we need to leverage data classification and sensitivity levels, including work done by our Office of the Chief Information Security Officer (CISO), and understand the business sensitivity of the data being shared: whether it is business proprietary or sensitive, along with other classification levels such as those recommended by UCOP.
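To make the “prompt engineering level” concrete, the sketch below builds a few-shot prompt: a handful of labeled examples are embedded directly in the prompt so the model can follow the pattern without any offline retraining. Every category and incident text here is a hypothetical placeholder, not real UCLA data.

    # A minimal sketch of adapting an LLM at the prompt level (few-shot
    # prompting): labeled examples go in the prompt itself, so no new
    # foundation model is trained. All examples are hypothetical.
    examples = [
        ("Employee slipped on a wet floor near the loading dock.", "slip/trip/fall"),
        ("Ladder used without a spotter to change a light fixture.", "high-risk behavior"),
        ("Team completed the monthly safety walkthrough with no findings.", "safe behavior"),
    ]
    new_report = "Worker lifted a heavy box alone and strained their back."

    lines = ["Classify each incident report into a safety category.", ""]
    for text, label in examples:
        lines += [f"Report: {text}", f"Category: {label}", ""]
    lines += [f"Report: {new_report}", "Category:"]

    prompt = "\n".join(lines)
    print(prompt)  # this text would be sent to an approved, internal LLM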
11. Does UCLA plan to give a presentation on the policy-making scheme at the state/national level?
This is something we could consider doing through the work of the CDAIO and institutional administrative operations. Our new CDAIO is helping to co-chair the California Lawyers Association AI Task Force and, in his prior role as Chief Technology and Innovation Officer (CTIO) at NASA JPL, contributed to the Biden Executive Order on AI at the federal level. First, however, UCLA needs to focus on campuswide AI literacy initiatives and on implementing the UCOP 2021 AI Working Group recommendations, which include creating an AI Council, streamlining the AI technology procurement process, implementing AI in a responsible and ethical manner, and creating a campus-wide AI inventory.
12. What are the benefits of using AI in the workplace?
AI can coexist with humans by speeding up repetitive and monotonous tasks, helping to achieve greater throughput and output through robotic process automation. It can also perform tasks such as visual and object recognition more efficiently and accurately than a human, as well as speech detection, natural language understanding, and translation.
13. How does AI impact job roles and responsibilities?
Part of implementing AI in an ethical manner is upskilling: identifying paths for jobs that may be impacted by automation so that humans can oversee the AI that takes on monotonous and repetitive tasks. This allows people across many functions to focus on more creative, complex, and meaningful work.
14. What skills do I need to work effectively with AI?
UCLA ITS is working on a “playlist” of recommended AI coursework already available on platforms like LinkedIn Learning to help educate our workforce in AI literacy. It will include topics such as machine learning, classification, and regression, as well as higher-level topics like neural networks and use cases. There are many classes available already, so a playlist offers a way to navigate the deluge toward a set of important skills for our workforce.
15. How secure is the data used and processed by AI?
Data used and processed by public AI tools like ChatGPT, Gemini, and Copilot (rather than the institutionally purchased internal versions of these tools) should not be considered secure at all. Any prompt input into the public tools trains those tools and will become part of the public versions of them.
You can visit the UCLA GenAI Guidance site to review the availability of GenAI tools and approved data classification levels. Please consider reviewing UCOP’s recommended data classifications and please partner with the CDAIO and CISO offices in ITS to discuss your use cases.
16. Can AI replace human judgment in decision-making?
In general, no; however, there are tasks, such as visual recognition, speech-to-text, and natural language processing, at which AI is generally considered better than (most) humans. AI depends on the data it is fed and the prompts it is given (in the case of large language models (LLMs)), and it still depends on humans to help it interact with the environment. So, in general, it will not replace human decision-making any time soon.
17. What are the costs associated with AI implementation?
AI tools can be costly to train, and in some cases, such as LLMs, cost depends on how the tool is used: you pay for the size of your questions and the number of iterations asked, along with the (amortized) cost to initially train the model and the cost to run it (inference) each time. ITS is working with Purchasing on methods for departments to procure Gemini and Copilot. Additionally, our next pilot, for OpenAI, will provide input into costing and the tools and services departments need to run these AI capabilities.
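As a rough illustration of usage-based pricing, the sketch below estimates the cost of prompt/response exchanges from token counts. The per-token rates are invented placeholders; actual rates vary by vendor and contract.

    # A rough sketch of usage-based LLM cost estimation. The rates below are
    # hypothetical placeholders; actual pricing varies by vendor and contract.
    INPUT_RATE_PER_1K = 0.01   # assumed $ per 1,000 input (prompt) tokens
    OUTPUT_RATE_PER_1K = 0.03  # assumed $ per 1,000 output (response) tokens

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Estimate the inference cost of one prompt/response exchange."""
        return ((input_tokens / 1000) * INPUT_RATE_PER_1K
                + (output_tokens / 1000) * OUTPUT_RATE_PER_1K)

    # Example: 500 prompt tokens and 1,200 response tokens per exchange,
    # repeated 2,000 times in a month.
    per_exchange = estimate_cost(500, 1200)
    print(f"Per exchange: ${per_exchange:.4f}")                      # $0.0410
    print(f"Monthly (2,000 exchanges): ${per_exchange * 2000:.2f}")  # $82.00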
18. How does AI affect collaboration and communication?
AI is already being integrated into tools like Zoom, Teams, and Google Meet in the form of meeting summaries, visual recognition, and speech recognition for accessibility, including closed captions, among other areas. ITS and its collaboration services, including Digital Spaces along with our Digital Foundry and CDAIO, can help navigate these tools and capabilities.
19. What is the future of AI in higher education?
It is safe to assume that AI will be a standard tool in every campus setting within the next two to four years. In 2023, four in ten teachers surveyed were using AI in their classrooms, and this is only predicted to increase.
20. Is there a repository I can search through to see what might be helpful/useful for me?
Currently, these tools are not present in a single repository, and are being procured on a license-by-license basis. We encourage those interested to get in touch with ITS and also browse available GenAI tools.
21. How do I keep UCLA data safe?
A key recommendation is to realize that the public versions of AI tools offer no safeguards for data, including, most importantly, the prompts typed into these models. Prompts will be used to train future versions of the model, so consult the data classifications and sensitivity levels in UCOP’s recommendations, and take care not to use these public tools with sensitive information.
22. Where can I go for more resources?
Visit UCLA's Generative AI Site and also our ITS website. Please contact Chris Mattmann, CDAIO (chris.mattmann@ucla.edu) for more information.
23. What are we doing to actively prepare campus for AI?
Right now, we are running pilots and experimenting, and we have hired the first CDAIO in the UC system to focus on preparing the campus for AI, including partnering administratively across all departments.
Conclusion
The integration of AI tools at UCLA presents both exciting opportunities and significant challenges. As these technologies evolve, they offer innovative ways to enhance teaching, learning, and administrative processes. It is crucial for staff, faculty, and students to navigate these tools responsibly. As we move forward, ongoing dialogue and education about AI will be essential in preparing our community to engage effectively and ethically with these powerful tools.
The following UCLA Information Technology Services staff provided answers based on questions generated by UCLA Administration’s 20XX Leadership group:
Ilana Intonato - Executive Director of Academic Technology
Jon Crumpler - Executive Director, Customer Success
Mikel Etxeberria - Director, Digital Foundry
Chris Mattmann - Chief Data and Artificial Intelligence Officer
Interested in reviewing prior months’ topics? Visit our Monthly Management Tips website.
Do you have feedback, questions or a suggested topic you would like to learn more about? Please email: managementtips@ucla.edu.
Want to receive Monthly Management Tips emails? Sign up for our list!