

As the cyber landscape evolves and grows increasingly connected, the partnerships, tools, and flexible device practices we depend on can also become weak points. For instance, the very third parties businesses rely on for operations can expose them to risks in the event of a cyber incident. Likewise, as artificial intelligence (AI) becomes more prevalent, it also accelerates cyber threats, and while bring-your-own-device (BYOD) policies give employees more flexibility, they can also expand a company's risk exposure.
In cases like these, cyber threats can become legal risks as well.
To better understand the scope of these legal concerns, I spoke with three Parker Poe attorneys: Sarah Hutchins, Robert Botkin, and Susie Lloyd. Hutchins directs the firm's Cybersecurity & Data Privacy Team, advises multinational corporations on federal and state data privacy compliance, and is recognized as a Certified Information Privacy Professional/United States (CIPP/US) by the International Association of Privacy Professionals (IAPP). Botkin, also a cybersecurity and data privacy attorney, has experience examining and mitigating enterprise risks associated with privacy and cybersecurity. Lloyd represents clients in litigation involving cybersecurity, data breaches, and regulatory compliance.
Below, these attorneys share their insights about the legal considerations organizations should be aware of when it comes to third-party relationships, AI, and BYOD policies.
Security: What legal considerations should organizations keep in mind when it comes to third-party relationships?
Hutchins: Companies have long relied on third-party specialists to conduct and grow their business. The tools and innovation that these service providers offer are necessary to stay ahead in the competitive race. These partnerships, however, are not without risk. When a company allows access to their information — be it customer or employee personal information, or critical business information that is confidential or a trade secret — contractual protections are paramount.
There are many contractual clauses I look for when representing a business that controls the data but needs to share it with a vendor to perform a necessary business function. I pay attention to the protections the vendor promises, such as how they secure the data and whether they promise to use it only for performing tasks under the agreement. I especially like to see clauses devoted specifically to data breach response, including when the vendor must disclose the possible incident and who takes financial responsibility for the investigation and notification tasks that may follow.
When I look at an agreement with a third party, I also focus heavily on the boilerplate clauses in the agreement — confidentiality, duty of mitigation, force majeure, limitation of liability, and, of course, the indemnity clause. Indemnity clauses often dictate which party will cover the litigation-related costs in the event of a lawsuit from someone outside of the contract. In the wake of a security incident, this could mean an individual, or class of individuals, whose data was impacted by the data breach. It could also be other companies who either had their confidential business information exposed or who had to incur costs related to the breach.
Allocation of these possible costs through an indemnification clause is important given the potential exposure. Security breaches involving third parties are increasingly common. Incurred costs escalate quickly and involve forensic investigation, attorneys’ fees, notification letters to individuals, mitigation efforts, and other significant penalties. Individuals impacted by a security breach — and their class action attorneys — often claim substantial impact. Considering these risks up front — and considering the insurance that may need to be secured — is vital in a world where security incidents are a matter of “when,” not “if.”
When your vendor experiences a security incident, there is an extra layer of uncertainty. You may only learn of the incident through a disruption in service or even in the media. Your business — along with all of your vendor's other customers — will be scrambling to get answers at the same time, and your service provider may or may not have the resources in place to give you timely answers. Depending on how much you rely on the service provider, a vendor security incident could leave your business operations entirely disrupted, with nothing but a busy signal when you attempt to get answers.
Here, the old adage is particularly relevant — an ounce of prevention is worth a pound of cure. Take the time to understand the information each of your service providers and other vendors has access to and, where important to ongoing business function, plan for a possible rainy day.
Many businesses have historically had gaps in vendor contracting and management. As a result, the selection, contracting, and onboarding of various vendors — especially those that engage in specialized IT functions — are not always fully understood by management. As a first step in managing vendor risk, companies should understand and document the data these service providers and vendors have access to, how that access is maintained, and the functions the third party provides, as in the sketch below. A vendor management system should be implemented at the same time so that knowledge gap does not reopen.
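As one concrete illustration of such documentation, a vendor inventory can start as a structured record per provider. The Python sketch below is hypothetical; the fields, vendor name, and contact address are placeholders rather than a prescribed schema.

```python
# Hypothetical vendor inventory record; fields are illustrative,
# not drawn from any specific framework.
from dataclasses import dataclass

@dataclass
class VendorRecord:
    name: str                   # vendor / service provider
    function: str               # business function the vendor performs
    data_accessed: list[str]    # categories of data the vendor can reach
    access_method: str          # how access is maintained (API, VPN, SFTP...)
    breach_contact: str         # who to call during an incident
    notice_hours: int           # contractual incident-notification window

inventory = [
    VendorRecord(
        name="ExamplePayroll Inc.",                     # placeholder vendor
        function="payroll processing",
        data_accessed=["employee PII", "bank account numbers"],
        access_method="nightly SFTP upload",
        breach_contact="security@examplepayroll.example",
        notice_hours=72,
    ),
]

# During an incident, the inventory answers "which vendors touch this data?"
exposed = [v.name for v in inventory if "employee PII" in v.data_accessed]
print(exposed)  # ['ExamplePayroll Inc.']
```

Even a simple record like this captures the three questions that matter in a breach: what data the vendor holds, how it gets there, and who to call.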
Once a full picture of vendor function and access is in place, it is necessary to game out the loss of critical provider services and the functions they support. Some companies are required to have an incident response plan under applicable regulations; however, all businesses should have one. An incident response plan provides a roadmap for a breach response and is prepared when all stakeholders can be thoughtful and collaborative — well before an actual incident occurs and it is all hands on deck. Planning for a third-party breach — including gathering information about contacts and key contract requirements — is helpful and will give back precious time during a breach response.
Your business interruption prevention plan should also include replicating key business functions in case a provider unexpectedly goes offline. While cost is an important factor in instituting redundancies, thinking through options ahead of time with all key personnel involved will provide a thoughtful plan in the wake of a real incident.
Finally, plan for loss of data. A regular off-site backup plan gives businesses vital options if data is ever held hostage. Businesses that can restore current data from a backup will not be as dependent on an unresponsive vendor or an unknown threat actor.
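As a rough illustration, the Python sketch below uploads a backup to an S3-compatible off-site bucket via boto3 and verifies that it can be restored intact. The bucket name and paths are placeholders, and a real plan would layer on encryption, rotation schedules, and alerting.

```python
# Minimal off-site backup sketch assuming an S3-compatible bucket.
# Bucket name and paths are placeholders; a real plan would add
# encryption, rotation schedules, and alerting on failures.
import hashlib
import boto3

s3 = boto3.client("s3")
BUCKET = "example-offsite-backups"  # hypothetical bucket

def backup(local_path: str, key: str) -> str:
    """Upload a backup and return its SHA-256 so restores can be verified."""
    with open(local_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    s3.upload_file(local_path, BUCKET, key)
    return digest

def verify_restore(key: str, expected: str, restore_path: str) -> bool:
    """Pull the backup back down and confirm it matches what was stored."""
    s3.download_file(BUCKET, key, restore_path)
    with open(restore_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == expected
```

The restore check matters as much as the upload: a backup that has never been test-restored is an untested assumption, not a plan.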
Security: What legal considerations should organizations keep in mind when it comes to AI? 
Botkin: The adoption of generative artificial intelligence (genAI) platforms introduces novel legal risks, many of which are insufficiently addressed by existing regulatory regimes. While AI is increasingly embedded in commercial tools, it is not always transparent how genAI providers will process their users’ data. This opaque environment places the onus squarely on companies and their counsel to anticipate risks and mitigate them through contractual protections and internal governance.
One of the most pressing risks is data leakage, which occurs when business or other sensitive information is entered into an AI system without proper corporate safeguards.
When employees use an AI platform their company has not provided access to — or sign into a company-approved platform with a personal email — the data they input is governed by the AI platform's terms of use rather than any agreement between the company and the AI provider. When a company and an AI provider enter into an agreement for AI services, it is common for the company to impose restrictions on the AI provider so that the company's data is used only to provide the AI service and not to train the AI model. When AI platforms retain and train on user inputs, confidential business information, personal data, or attorney-client communications may be absorbed into the model and later reflected in responses to unrelated users.
Additionally, many providers log prompts and outputs for system improvement or troubleshooting; if these logs are insecure or accessible to third parties, data can be inadvertently disclosed — giving third parties a trove of valuable data. Moreover, when genAI tools are integrated with other applications through APIs or third-party plug-ins, weak access controls or poor configurations may allow information to flow outside the company’s control. These pathways highlight why data leakage is not just a cybersecurity concern but also an attorney-client privilege, confidentiality, and compliance issue.
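On the company side, one partial mitigation is to scrub obviously sensitive strings from prompts before they leave the company's control. The Python sketch below is a toy illustration of that idea; the regex patterns are simplistic assumptions, and production data-loss-prevention tooling is far more thorough.

```python
# Toy prompt scrubber; the patterns are illustrative assumptions,
# and production DLP tooling is far more thorough.
import re

REDACTIONS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[REDACTED-SSN]",      # US SSN-shaped string
    r"\b\d{13,16}\b": "[REDACTED-CARD]",             # likely payment card
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[REDACTED-EMAIL]",  # email address
}

def scrub(prompt: str) -> str:
    """Replace sensitive-looking substrings before a prompt is logged
    or sent to an external AI provider."""
    for pattern, replacement in REDACTIONS.items():
        prompt = re.sub(pattern, replacement, prompt)
    return prompt

print(scrub("Email jane.doe@example.com about SSN 123-45-6789"))
# Email [REDACTED-EMAIL] about SSN [REDACTED-SSN]
```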
Companies leveraging AI should supplement their standard technology and SaaS contracts with clauses tailored to address the risk AI poses. With data use and retention, for example, the agreement should state unequivocally whether customer inputs will be used to train the vendor’s models. If not, the agreement should include a “no training” clause, require deletion of prompts after processing, and prohibit use of data for analytics without consent.
In terms of confidentiality and privilege, AI providers should be bound to treat all inputs and outputs as confidential information, with restrictions on internal access. For AI tools marketed to the legal industry, the agreement should expressly state that disclosing attorney-client communications into the tool does not waive privilege and that the AI provider must maintain technical and organizational safeguards.
With intellectual property, agreements should allocate ownership of outputs to the customer and disclaim the AI provider's rights to reuse them. Given the opacity of training data sources, AI providers should indemnify customers against claims that outputs infringe third-party IP, with uncapped liability for those claims.
In terms of audit and oversight, companies should reserve rights to audit or receive detailed reports on data handling, model updates, and security certifications. AI providers should also commit to notifying customers of material changes to model behavior, security practices, or compliance policies.
Security: What legal considerations should organizations keep in mind when it comes to BYOD policies? 
Lloyd: BYOD policies were once hailed as a cost-effective way to boost productivity and increase flexibility. Companies saved millions by letting employees use personal devices instead of buying and managing corporate hardware. But the landscape has shifted. With genAI entering the workplace, BYOD is no longer just a convenience — it's a growing liability.
Personal devices lack the consistent security controls of corporate-issued hardware. Employees mix work and personal apps, connect to unsecured networks, and often skip critical updates. When sensitive corporate data lives alongside personal content, the risk of accidental or malicious exposure skyrockets.
Employees increasingly use AI tools to draft emails, analyze data, or even write code, often on the same devices they use for social media, booking healthcare appointments, and shopping. This overlap creates two major risks: proprietary information can end up in AI platforms that store or learn from user inputs, and unapproved tools can bypass governance, making it nearly impossible to track where sensitive data goes.
Eliminating BYOD is not a quick fix. Companies will face higher costs for purchasing and managing corporate devices, implementing Mobile Device Management (MDM), and hiring additional IT staff to handle a larger fleet of endpoints. Device procurement alone, especially for remote or global teams, can be a major expense. The alternative, however, can be far more costly: a single breach can lead to regulatory fines, lawsuits, and reputational damage that dwarf the cost of new hardware and IT resources.
Organizations should weigh the financial trade-offs and consider three measures: adopting zero trust architecture to continuously authenticate every user and device; defining AI governance policies that specify approved tools and prohibit inputting sensitive data into public models; and implementing corporate-owned, personally enabled devices as a middle ground between security and flexibility. A simple version of such a governance gate is sketched below.
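As a rough sketch of how the approved-tools piece of such a policy could be enforced in software, the Python example below combines an allowlist with a sensitivity check so that sensitive inputs never reach public models. The tool names and the stub classifier are hypothetical placeholders.

```python
# Sketch of an AI-governance gate: only approved tools may be called,
# and prompts flagged as sensitive never reach public models.
# Tool names and the stub classifier are hypothetical.
APPROVED_TOOLS = {"internal-llm": False, "public-chatbot": True}  # name -> is_public

def is_sensitive(prompt: str) -> bool:
    """Stub check; real deployments would plug in DLP scanning here."""
    markers = ("confidential", "ssn", "privileged", "trade secret")
    return any(m in prompt.lower() for m in markers)

def allow_request(tool: str, prompt: str) -> bool:
    if tool not in APPROVED_TOOLS:
        return False  # unapproved tool: block outright
    if APPROVED_TOOLS[tool] and is_sensitive(prompt):
        return False  # sensitive data never hits public models
    return True

assert allow_request("internal-llm", "analyze this confidential memo")
assert not allow_request("public-chatbot", "analyze this confidential memo")
assert not allow_request("shadow-ai-app", "anything at all")
```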
The savings that once justified BYOD no longer outweigh the risk. In the genAI era, security is not optional — it is the cost of doing business.
Stay Ahead of Cyber Threats to Protect Your Organization
Mitigating cyber risks also means mitigating legal risks. By taking into account the legal considerations of third-party relationships, AI, and BYOD policies, organizations can better protect themselves, their data, and their customers' privacy.

