Using ChatGPT or other AI-powered systems in a professional setting requires strict adherence to information security protocols so that sensitive data remains protected and the overall environment stays secure.
Here are some key guidelines to keep in mind:
Data Protection: Ensure that any sensitive or confidential information disclosed during chat sessions remains safeguarded. Do not share personally identifiable information (PII) or any sensitive university-related data with AI systems.
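For staff who build their own scripts around AI services, the sketch below illustrates one way to screen a prompt for obvious PII before it leaves your machine. It is a minimal, hypothetical example: the two regex patterns and the redact_pii helper are not an ITS-provided tool, and a vetted data-loss-prevention product should be preferred in practice.

```python
import re

# Hypothetical illustration only: naive patterns for two common PII types.
# A production workflow should rely on a vetted data-loss-prevention tool.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with labeled placeholders before the
    text is submitted to ChatGPT or any other external AI system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# Example: the email address and SSN below are fictitious.
prompt = "Draft a reply to jdoe@example.edu about SSN 123-45-6789."
print(redact_pii(prompt))
# Draft a reply to [REDACTED EMAIL] about SSN [REDACTED SSN].
```

Simple pattern matching like this catches only well-formed identifiers; names, addresses, and free-form student records will slip through, which is why the guidance above is to avoid sharing such data with AI systems in the first place.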
Video Conference Sessions: AI notetaking bots such as Otter.AI and Read.AI are increasingly common in virtual meetings conducted via Zoom, and are often added by the host.
While these tools can be convenient, they also pose considerable privacy and security challenges. As with recorded meetings, the data is sent to the cloud, where sensitive information could potentially be accessed.
If you are in a meeting where a bot is present, do not hesitate to raise your privacy concerns and request that recording be stopped; the host can disable these AI bots. Remember that new technology brings benefits, but new challenges and risks often emerge alongside them.
Ethical Risks:
While AI undoubtedly brings tremendous capabilities, it also introduces significant ethical risks that warrant careful consideration. Here are some key areas of concern:
- Bias: AI systems learn from data gathered from the Internet, which may reflect discriminatory or biased viewpoints that the systems can then reproduce in their output.
- Misinformation: AI systems have no inherent knowledge and cannot fact-check their own output, so they may confidently propagate misinformation.
- Threat Actors: The powerful capabilities of AI can be exploited by malicious actors to disseminate false information, orchestrate social engineering attacks, or create convincingly deceptive content.
- Privacy Concerns: Interactions with AI often involve the exchange of personal or sensitive information. It is crucial to weigh the privacy and security implications of providing such information to an AI system, as unauthorized access or data breaches could result in the exposure of sensitive data.
- Accountability Challenges: AI systems function as tools, and the onus for their output ultimately falls on the users. This creates complexities in ascertaining responsibility if the system generates harmful or unethical content. As a safety measure, always scrutinize any content produced by AI.
Vendor Due Diligence: When engaging a third-party provider for AI services, it is important to understand the license agreement. ITS should review the vendor's security policies, data safeguarding measures, and adherence to pertinent regulations to ensure that the vendor's security practices align with the University's established best practices.
Employee Training and Awareness: All employees must follow best practices when interacting with AI systems like ChatGPT and uphold the University's information security standards. Any questions or concerns should be directed promptly to the Information Security Team at iso@csueastbay.edu.
Our partners in TLI have a knowledge base with more information on AI Meeting Assistants:
AI Meeting Assistants