Zoom’s Updated AI Policy Draws Concern From Privacy Experts


Zoom has made changes to its AI strategy—twice.

The video conferencing platform updated its terms of service to establish the right to use some user-level data to train its artificial intelligence/machine-learning models, without giving customers the option to opt out.

Soon after a public outcry, the platform made more changes to its terms of service.

Top line

As of July 27, Zoom’s revised TOS said it could collect and use “service-generated data” related to product usage, telemetry and diagnostics to train AI models. It did not give users the option to opt out.

After drawing criticism from privacy experts on social media earlier today, Zoom updated its TOS again to quell public concerns. Zoom admins can now choose whether their meeting data can be used to “improve the performance and accuracy of these AI services.”

“We’ve updated our terms of service to further confirm that we will not use audio, video or chat customer content to train our artificial intelligence models without your consent,” a Zoom spokesperson told Adweek.

However, Zoom’s updated terms are still unclear on how the company will ask for consent, “and if they do so in a way that will highlight this exposure of information,” according to Violet Sullivan, vp of client engagement for Redpoint Cybersecurity and a privacy law professor at Baylor Law School.

Between the lines

Zoom’s policy changes come amid a growing public discourse on the ethical boundaries of artificial intelligence models being trained using people’s data, whether aggregated or anonymized.

In June, Zoom launched two generative AI offerings: a meeting summary tool and a tool for composing chat messages. Both are available to customers on a free trial basis, and customers can decide whether or not to use them.

However, when a user enables these features, the platform also requests consent to collect their data to train its AI models.

The TOS states that customers consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance and storage of service-generated data for “any purpose,” including “machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models).”

In another section of its TOS, the company states that customers “agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable and transferable license” to use their data for “product and service development,” including machine learning and artificial intelligence models.

Zoom’s new requirements could have big implications, especially in fields such as telehealth that are subject to stringent privacy laws.

“We will not use customer content, including education records or protected health information, to train our artificial intelligence models without [user] consent,” a Zoom spokesperson told Adweek.

Experts, however, still have concerns. “The other question is, will employees be able to grant access to the entire company?” Sullivan said, noting that this could expose trade secrets to AI training, especially for brands that use Zoom daily.

Bottom line

Zoom, which saw wide adoption during the Covid-19 pandemic, is no stranger to privacy criticism. The company settled an $85 million class-action lawsuit in April last year over security issues that enabled hackers to crash virtual meetings, a practice known as Zoom bombing.

Sweeping policy changes, especially to keep up with ever-evolving technologies such as GenAI, are inevitable. How Zoom’s new policies play out in privacy-sensitive situations remains to be seen.

Meanwhile, artificial intelligence platforms from OpenAI’s ChatGPT to Google’s Bard, as well as image-generation tools like Midjourney, have drawn criticism for being trained on public data.

