At the beginning of the year, Moncler confirmed that it had suffered a ransomware attack on its systems that led to a headline-making data breach. The leaked data exposed information about the Italian outerwear-maker’s employees and former employees, “some suppliers, consultants and business partners,” and customers. The attack followed a data breach at American fashion brand Guess in the summer of 2021, in which criminals were able to obtain social security numbers, ID numbers (driving licenses and passports), and financial account numbers. Around the same time, Chanel suffered a similar fate with its South Korean operation, which resulted in the leak of names, personal information, and shopping histories.

Instances of cyberattacks and hacking generally should not come as a surprise to brands. A recent Office for National Statistics report showed that while most forms of crime in the United Kingdom are trending downward, crimes involving computers and hacking are experiencing a noticeable uptick. The same is true in the U.S., with ransomware attacks, alone, rising by almost 100 percent in 2021, according to SonicWall’s 2022 Cyber Threat Report.

When hacks occur, government agencies, such as the Information Commissioner’s Office (“ICO”) in the United Kingdom and the Federal Bureau of Investigation and relevant State authorities in the U.S., expect companies to deal with them proactively and ensure that any serious breach is resolved effectively. In light of increasing threats of cyberattacks and hacking, including for fashion brands, guidance on how companies – in fashion and beyond – should approach the issue is set out below …

What do hackers want and how do they get it?

Fashion brands are a gold mine for data that can be exploited. Hackers target clients’ personal information, financial information, and operations and systems, all of which are readily accessible, especially since most players in this space maintain e-commerce shops. Hackers can access such information by way of a data breach, namely, targeted attacks on secure logins, where they obtain information; ransomware, where access to files or systems is blocked until a ransom fee is paid; and/or denial-of-service attacks, in which a system or server is flooded with targeted requests, preventing legitimate requests from being fulfilled.

What actions should you take if a breach occurs?

In the UK, the ICO expects a company to take action if it finds itself the victim of a cyberattack or breach, which it defines as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data.” Primarily, companies are expected to carry out a data breach risk assessment, including by determining whether there is a risk that data subjects will be seriously affected by the breach. They are also expected to inform individuals affected by a high-risk data breach without delay, and to inform the regulator as soon as practically possible and, in any event, within 72 hours.

In the U.S., the response will vary from state to state. Depending on the severity of the breach, the state attorney general – and eventually customers – may need to be notified, subject to notification requirements similar to those found under UK law.

When providing details to affected individuals, a brand needs to inform them, in clear language, of the nature of the breach and what personal data was affected. They should also be provided with details of the relevant contact point or the details of the brand’s data protection officer. It is recommended that individuals are provided with information on how the brand will assist them going forward and any actions they can take to protect themselves. Guidance from the ICO outlines that this may include forcing a password reset; advising individuals to use strong, unique passwords; and telling them to look out for phishing emails or fraudulent activity on their accounts.

If, after a risk assessment, the brand has decided that a notification to the ICO is not necessary, it is still highly advisable that the company records information about the breach and actions taken in response. If the ICO decides that an investigation is necessary, the company may be asked to justify the decisions it made.

Reporting the data breach

If a report to the ICO is necessary, then it is important that the following information is captured and shared with the ICO: the approximate number of affected individuals; how many personal data records were affected; the name and contact details of the data protection officer or other contact point; the effects of the breach; and the actions taken in response.

Again, in the U.S., while this may vary from state to state, it is likely that the report will contain information that is similar to what is expected by the ICO.

Take home points

If a brand finds itself on the receiving end of a cyberattack or other data breach, it is important to be as prepared as possible. Planning in advance is ideal, and is likely to include contingency measures. However, as it may be difficult to plan for all eventualities, the following best practices will also limit what can be hacked: Do not store sensitive data in clear text; pseudonymize or encrypt it. Do not hold onto incomplete or old data; while it may no longer be relevant to your business, it can still expose data subjects to malicious actions by hackers. Ensure that access to data is handled on a strictly controlled basis, and that the company maintains an appropriate security policy and carries out regular cybersecurity training for staff. Finally, carry out regular information risk assessments, and maintain a response and recovery plan.
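By way of illustration of the first of those practices, below is a minimal Python sketch of field-level pseudonymization using a keyed hash; the field names, the key handling, and the record itself are hypothetical, and a real deployment would keep the key in a proper key management service.

```python
import hashlib
import hmac
import os

# Hypothetical secret key; in practice this would live in a key
# management service, never in code or alongside the data it protects.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records can still
    be joined for analytics, but the original value cannot be recovered
    without the key.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()

# Store the token, not the raw email address (record is illustrative).
customer_record = {
    "customer_id": pseudonymize("jane.doe@example.com"),
    "last_purchase": "2022-08-15",
}
print(customer_record)
```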

Vladimir Arutyunyan is an associate in Fox Williams’ commercial and technology team in London.

The Federal Trade Commission (“FTC”) filed suit against Kochava Inc. on August 29, accusing the data broker of selling geolocation data from hundreds of millions of mobile devices. Consumers are often unaware that their location data is being sold and that their past movements can be tracked, according to the FTC. The government agency’s data privacy-centric lawsuit specified that Kochava’s data can be used to track consumers to sensitive locations, including “to identify which consumers’ mobile devices visited reproductive health clinics.”

When the U.S. Supreme Court overturned Roe v. Wade on June 24, many people seeking abortion care found themselves in legal jeopardy. Numerous state laws criminalizing abortion thrust the perilous state of personal privacy into the spotlight. If people want to travel incognito to an abortion clinic, according to well-meaning advice, they need to plan their trip the way a CIA operative might – and get a burner phone. Unfortunately, that still would not be good enough to guarantee data privacy. Using a maps app to plan a route, sending terms to a search engine and chatting online are ways that people actively share their personal data. 

But mobile devices share far more data than just what their users say or type. They share information with the network about whom people contacted, when they did so, how long the communication lasted and what type of device was used. The devices must do so in order to connect a phone call or send an email.

Who’s Talking to Whom?

When National Security Agency whistleblower Edward Snowden disclosed that the NSA was collecting Americans’ telephone call metadata – the Call Detail Records – in bulk in order to track terrorists, there was a great deal of public consternation. The public was rightly concerned about loss of data privacy. Researchers at Stanford later showed that call detail records plus publicly available information could reveal sensitive information, such as whether someone had a heart problem and their arrhythmia monitoring device was malfunctioning or whether they were considering opening a marijuana dispensary. Often you do not have to listen in to know what someone is thinking or planning. Call detail records – who called whom and when – can give it all away.
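To make the point concrete, here is a toy Python sketch over invented call detail records; no call content appears anywhere, yet the calling pattern alone suggests the heart-monitor scenario described above.

```python
from collections import Counter

# Invented call detail records: (caller, callee, timestamp, seconds).
# Transmission metadata only -- no call content at all.
cdrs = [
    ("alice", "cardiology-clinic", "2022-03-01T09:02", 180),
    ("alice", "cardiology-clinic", "2022-03-03T09:05", 240),
    ("alice", "device-support-line", "2022-03-03T09:15", 600),
    ("alice", "bob", "2022-03-04T19:30", 1200),
]

# Who contacts whom, and how often? Repeated calls to a cardiology
# clinic followed by a call to a device support line already hint at
# a heart condition and a malfunctioning monitor -- no wiretap needed.
contacts = Counter((caller, callee) for caller, callee, _, _ in cdrs)
for (caller, callee), n in contacts.most_common():
    print(f"{caller} -> {callee}: {n} call(s)")
```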

The transmission information in internet-based communications – IP-packet headers – can reveal even more than call detail records do. When you make an encrypted voice call over the internet – a Voice over IP call – the contents may be encrypted, but information in the packet header can nonetheless sometimes divulge some of the words you are speaking.

Pocket Full of Sensors

That is not the only information given away by your communications device. Smartphones are computers, and they have many sensors. For your phone to properly display information, it has a gyroscope and an accelerometer; to preserve battery life, it has a power sensor; to provide directions, a magnetometer. Just as communications metadata can be used to track what you are doing, these sensors can be used for other purposes. You might shut off GPS to prevent apps from tracking your location, but data from a phone’s gyroscope, accelerometer, and magnetometer can also track where you are going.
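A deliberately naive sketch shows the principle: integrating accelerometer readings once gives velocity, and integrating again gives position, so movement can be reconstructed without GPS. The readings below are fabricated, and real inertial tracking must also correct for gravity, rotation, and sensor drift.

```python
# Naive dead reckoning along a fixed heading (say, supplied by the
# magnetometer): double-integrate acceleration to estimate position.
DT = 1.0  # seconds between samples
accel_samples = [0.5, 0.5, 0.0, -0.5, -0.5]  # m/s^2, fabricated

velocity = 0.0
position = 0.0
for a in accel_samples:
    velocity += a * DT          # acceleration -> velocity
    position += velocity * DT   # velocity -> position
    print(f"v={velocity:4.1f} m/s, x={position:4.1f} m")
```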

This sensor data could be attractive to businesses. For example, in January 2016 Facebook was granted a patent that relies on the different wireless networks near a user to determine when two people might have been close together frequently – at a conference, riding a commuter bus, etc. – as a basis for providing an introduction. Creepy? You bet. At the same time, Uber, for instance, knows that people really want a ride when their battery power is low. Is the company checking for that data and charging more? Uber claims it is not, but the possibility is there.

And it is not just apps that get access to this data trove. Data brokers get this information from the apps, then compile it with other data and provide it to companies and governments to use for their own purposes. Doing so can circumvent legal protections that require law enforcement to go to court before they obtain this information.

Data Privacy & Consent

There is not a whole lot users can do to protect themselves. Communications metadata and device telemetry – information from the phone sensors – are used to send, deliver, and display content. Not including them is usually not possible. And unlike the search terms or map locations you consciously provide, metadata and telemetry are sent without you even seeing them. Meanwhile, providing consent is not plausible. There is too much of this data, and it is too complicated to decide each case. Each application you use – video, chat, web surfing, email – uses metadata and telemetry differently. Providing truly informed consent – knowing what information you are providing and for what use – is effectively impossible.

If you use your mobile phone for anything other than a paperweight, things like your visit to a clothing store or a cannabis dispensary and your personality – how extroverted you are or whether you are likely to be on the outs with family since the 2016 election – can be learned from metadata and telemetry and shared.

That is true even for a burner phone bought with cash, at least if you plan on turning the phone on. Do so while carrying your regular phone and you will have given away that the two phones are associated – and perhaps even that they belong to you. As few as four location points can identify a user, another way your burner phone can reveal your identity. If you are driving with someone else, they would have to be equally careful or their phone would identify them – and you. Metadata and telemetry information reveals a remarkable amount about you. But you do not get to decide who gets that data, or what they do with it.
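A toy sketch of why so few location points suffice: each spatiotemporal observation filters the pool of candidate users, and the intersection collapses to a single person almost immediately. The traces below are invented.

```python
# Invented location traces: user -> set of (area, hour-of-day) points.
traces = {
    "user_a": {("midtown", 9), ("airport", 14), ("suburb", 20), ("gym", 7)},
    "user_b": {("midtown", 9), ("harbor", 14), ("suburb", 20), ("gym", 7)},
    "user_c": {("midtown", 9), ("airport", 14), ("downtown", 20), ("mall", 7)},
}

# Four observed points about an "anonymous" device.
observed = {("midtown", 9), ("airport", 14), ("suburb", 20), ("gym", 7)}

# Keep only users whose trace contains every observed point; with four
# points, a single candidate remains.
candidates = {user for user, points in traces.items() if observed <= points}
print(candidates)  # {'user_a'}
```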

The Reality of Technological Life

There are some constitutional guarantees to anonymity. For example, the Supreme Court held that the right to associate, guaranteed by the First Amendment, is the right to associate privately, without providing membership lists to the state. But with smartphones, that is a right that is effectively impractical to exercise. It is nearly impossible to function without a mobile phone. Paper maps and public payphones have virtually disappeared. If you want to do anything – travel from here to there, make an appointment, order takeout, or check the weather – you all but need a smartphone to do so.

It is not just people who might be seeking abortions whose privacy is at risk from this data that phones shed. It could be your kid applying for a job: For instance, the company could check location data to see if they are participating in political protests. 

There is a way to solve this chilling scenario, and that’s for laws or regulations to require that the data you provide to send and receive communications – TikTok, Snap, YouTube – is used just for that, and nothing else. That helps the people going for abortions – and all the rest of us as well.

Susan Landau is a Professor of Cyber Security and Policy at Tufts University. (This article was originally published by The Conversation.)

Sephora has agreed to pay $1.2 million to settle allegations that it violated the California Consumer Privacy Act (“CCPA”), a state law that limits companies’ collection and sale of consumers’ personal information and provides consumers with expansive rights with respect to their personal information. The LVMH-owned beauty retailer came under fire after an enforcement sweep of online retailers by the California Attorney General’s office revealed that it “failed to disclose to consumers that it was selling their personal information, failed to process user requests to opt out of sale via user-enabled global privacy controls in violation of the CCPA, and did not cure these violations within the 30-day period currently allowed by the CCPA.” The settlement, which is dependent upon court approval, is the first CCPA enforcement action since the law went into effect on January 1, 2020. 

“The settlement with Sephora underscores the critical rights that consumers have under California Consumer Privacy Act to fight commercial surveillance,” California Attorney General Rob Bonta said in a statement on Wednesday. “Consumers are constantly tracked when they go online, [with] many online retailers allowing third-party companies to install tracking software on their website and in their app so that third parties can monitor consumers as they shop. These third parties track all types of data – in Sephora’s case, the third parties could create profiles about consumers by tracking whether a consumer is using a MacBook or a Dell, the brand of eyeliner or the prenatal vitamins that a consumer puts in their ‘shopping cart,’ and even a consumer’s precise location.” 

“Retailers like Sephora benefit in kind from these arrangements, which allow them to more effectively target potential customers,” Bonta says. 

In addition to paying $1.2 million in penalties, as part of the settlement, Sephora will also be required to clarify its online disclosures and privacy policy to include an affirmative representation that it sells data; provide mechanisms for consumers to opt out of the sale of personal information, including via the Global Privacy Control (“GPC”); conform its service provider agreements to the CCPA’s requirements; and provide reports to the Attorney General relating to its sale of personal information, the status of its service provider relationships, and its efforts to honor GPC. 
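As technical context for that last requirement, the Global Privacy Control signal is transmitted as the HTTP request header Sec-GPC: 1 (and exposed to page scripts as navigator.globalPrivacyControl). A minimal Python sketch of honoring the signal might look like the following; the handler and the persistence call are hypothetical.

```python
def honor_gpc(request_headers: dict, user_id: str) -> None:
    """Treat a Global Privacy Control signal as an opt-out of 'sale.'

    The GPC proposal transmits the signal as the request header
    "Sec-GPC: 1"; under the settlement, Sephora must process such
    signals as valid CCPA opt-out requests.
    """
    if request_headers.get("Sec-GPC") == "1":
        record_opt_out(user_id)

def record_opt_out(user_id: str) -> None:
    # Hypothetical persistence step: flag the user so their data is
    # excluded from third-party sharing that the CCPA treats as a "sale."
    print(f"opt-out of sale recorded for {user_id}")

honor_gpc({"Sec-GPC": "1"}, user_id="visitor-123")
```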

In response to the settlement, which it says “does not constitute an admission of liability or fault by Sephora,” the New York-headquartered beauty chain pushed back against the Attorney General’s classification of its data practices as the “sale” of data. The CCPA “does not define ‘sale’ in the traditional sense of the term,” a representative for Sephora said in a statement on Wednesday. “‘Sale’ includes common, industry-wide technology practices such as cookies, which allow us to provide consumers with more relevant Sephora product recommendations, personalized shopping experiences and ads.” 

The Sephora settlement comes as part of a larger effort by Bonta’s office to enforce the California Consumer Privacy Act, with Bonta revealing on Wednesday that he sent notices to “a number of businesses” alleging non-compliance relating to their failure to process consumer opt-out requests made via user-enabled global privacy controls. 

Characterized as one of the strongest consumer privacy laws in the country, the CCPA applies to for-profit “businesses” that do business in California; that collect California residents’ personal information (or on whose behalf such information is collected) and, alone or jointly with others, determine the purposes or means of processing that data; and that either have at least $25 million in annual revenue, hold personal data on at least 50,000 people, or derive at least 50 percent of annual revenue from selling consumers’ personal information. (Companies need not be headquartered – or even have a physical presence – in California for the law to apply.)
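Expressed as a decision rule, the threshold portion of that test reads roughly as follows; this is a simplified sketch, and the statutory definitions are considerably more nuanced than three numbers.

```python
def ccpa_thresholds_met(annual_revenue_usd: float,
                        ca_consumers_with_data: int,
                        pct_revenue_from_selling_data: float) -> bool:
    """Simplified CCPA threshold test for a for-profit business that
    does business in California and collects residents' personal data.

    Meeting any ONE of the three statutory thresholds is enough.
    """
    return (
        annual_revenue_usd >= 25_000_000
        or ca_consumers_with_data >= 50_000
        or pct_revenue_from_selling_data >= 50.0
    )

# A retailer with $30M in revenue is covered even if it sells no data.
print(ccpa_thresholds_met(30_000_000, 10_000, 0.0))  # True
```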

The CCPA provides a non-exhaustive list of categories of personal information, including name, postal address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers; characteristics of protected classifications under California or federal law; commercial information, including records of personal property, products, or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies; biometric information; internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an internet website, application, or advertisement; geolocation data; professional or employment-related information; and education information that is not publicly available.

Not only are CCPA actions being initiated by the state’s Attorney General, the CCPA provides a private right of action, which has prompted more than 170 CCPA claims to be filed as of earlier this year, a handful of which have targeted retailers. According to Steptoe & Johnson’s Stephanie Sheridan, Meegan Brooks, and Surya Kundu, “Courts are continuing to determine what conduct falls within the CCPA’s narrow private right of action, which applies only when a statutorily-defined subset of a California resident’s ‘non-encrypted and non-redacted’ personal information ‘is subject to an unauthorized access and exfiltration, theft, or disclosure as a result of the business’s violation of the duty to implement and maintain reasonable and appropriate security procedures and practices.’”

All the while, a new, more aggressive iteration of the CCPA, the California Privacy Rights Act, will take effect in 2023, which Sheridan, Brooks, and Kundu claim “could usher in a new wave of private and public enforcement suits.”

No longer operating the way that they did in a pre-COVID world, fashion and luxury brands have been forced to face the reality that they cannot stay ahead without increasing their data capabilities. “Data will be the key to unlocking the insights needed to adapt to change and to reengage customers in the coming months and years,” McKinsey analysts stated in a note last year, reflecting on the impact of the pandemic on fashion and luxury brands. However, they asserted that the pandemic also “exposed a major shortfall in data gathering and analysis across much of the [fashion and luxury] industry,” meaning that “the sooner fashion and luxury companies learn to harness the power of data, the better.” 

The notion that fashion and luxury brands have access to a wealth of structured data on their customers that can be collected and processed to drive sales, but that they are “often under-utilizing this data, and, furthermore, ignoring the vast reams of unstructured data (such as consumer comments on social media, affluent influencers’ photo feeds on Instagram, and engagements across multi-channel customer journeys) that can be mined to glean invaluable insight into customer lifestyles, shopping preferences, and purchase behaviors,” is enduring in a post-pandemic world.  

A survey conducted by the Luxury Institute’s Affluent Analytics Lab reveals that while some brands may have come out of the pandemic stronger thanks to their use of big data, most brands in luxury, across all categories and levels, are not there yet. Most brands appear to be playing at “aspirational” levels when it comes to their data and analytics capabilities, the Affluent Analytics Lab found in connection with a survey of executive-level figures from an array of global luxury goods and services brands, as well as their top consultants, this spring.

The results indicate that data and analytics processes across most luxury enterprises are “broken,” the Affluent Analytics Lab states, which points to the following as some of the key highlights from its survey … 

Data Collection & Integration Capabilities 
When asked to rate their – or their client’s brand’s – data collection capabilities, a majority of respondents (56 percent) are neutral (34 percent), dissatisfied (20 percent), or very dissatisfied (2 percent). A scant 2 percent said they are very satisfied, while 42 percent are satisfied that their brand’s data collection is adequate. While data collection is the area in which participants give their highest ratings, the ratings for the enterprises’ other internal data and analytics capabilities go downhill from there.

When asked to rate whether data collected from various internal (e.g., transaction and website navigation data) and external sources (e.g., vendor third party data) has been integrated into one seamless view of the customer, 72 percent of survey responders stated this spring that this critical step has only been “partially addressed,” while 15 percent stated that it has not been addressed. Only 13 percent said that they feel that this need has been addressed adequately by the enterprise.
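What “one seamless view of the customer” amounts to in practice is a merge of every source’s records on a shared customer identifier; below is a toy Python sketch with invented sources and fields.

```python
# Invented internal and external sources, keyed on a shared customer ID.
transactions = {"c1": {"lifetime_spend": 4200.0}}
web_navigation = {"c1": {"pages_viewed_last_30d": 58}}
vendor_data = {"c1": {"inferred_segment": "frequent traveler"}}

def unified_view(customer_id: str) -> dict:
    """Merge each source's fields for one customer into one record."""
    view = {"customer_id": customer_id}
    for source in (transactions, web_navigation, vendor_data):
        view.update(source.get(customer_id, {}))
    return view

print(unified_view("c1"))
```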

Data Access for Analysis 
The ability to access data that is internally stored in one place is important for the various groups within the enterprise – including (but not limited to) logistics, finance, marketing, and sales – to be able to use the data readily. This is also known as data democratization within an enterprise. On this important process, 54 percent of survey participants responded that this is only partially addressed by their employer-company, 28 percent reported that it is not addressed, and a minority (18 percent) said that it is fully addressed.

Analytics Culture & Capabilities
With respect to having cultivated a data-driven, analytics-first mind-set and brand culture within their enterprise, a full two-thirds of responders (67 percent) state that this is only “partially addressed,” while 26 percent feel it is not yet addressed at the company level. Only a low 8 percent feel their enterprise has an analytics culture. Given that reported lack of an analytics culture in most enterprises, it is no surprise that a strong 70 percent of survey responders gave a neutral (39 percent), dissatisfied (29 percent), or very dissatisfied (2 percent) rating to the analytics capabilities of the brand. A scant 2 percent said that they are very satisfied, while 29 percent said that they are satisfied.

Analytics Tools & Expertise 
Most luxury brands report that they lack analytics expertise. Only a scant minority (5 percent) reported that the brand has personnel with modern analytics training and skills, such as data science, AI, and machine learning, to execute their analytics. (In other words, companies are lacking key data architect, data scientist, and data steward professionals to help “ensure that core decision makers, such as designers, merchandising teams, and e-commerce teams, can translate data and analytics to fit business needs,” per McKinsey.) A whopping 95 percent reported that this critical need is partially addressed (56 percent) by the company or not addressed at all (39 percent).

And only 8 percent of luxury brand executives revealed that they use modern analytics tools, such as data visualization and powerful, self-service business intelligence tools, to conduct customer analytics. An overwhelming 92 percent of responders stated that the need is not fully addressed (67 percent) or not addressed at all (25 percent).

Ultimately, data collection is an area where there is the highest level of satisfaction reported by luxury enterprises, according to the Luxury Institute. Yet, only a minority of brands reported being satisfied. Once the data is collected, most enterprises reported “systemic failures across all elements of the data management and data analytics processes and capabilities.” Qualitative responses as to how the data is used indicate that luxury brands primarily use data for “basic and rudimentary tasks, such as to measure outputs (email campaign results, total sales, etc.)” as opposed to generating high-performance inputs that “accurately define and respectfully target pinpoint, high propensity audiences.” 

First proposed in December 2020, the Digital Services Act (“DSA”) was agreed to late last month. The European Parliament (the European Union’s legislative arm) came to an agreement in principle with the individual EU Member States to move forward with the process of finalizing the DSA, which will set forth new accountability and fairness standards for online platforms, social media platforms, and other internet content providers, depending on the entity’s size, societal role, and impact on individuals’ lives.

Broadly speaking, the DSA will counter the sale of illegal products and services on online marketplaces and aims to combat illegal and harmful content on online platforms, such as social media. At the same time, the Digital Services Act also broadly aims to increase transparency and fairness in online services.

In a similar vein to past comprehensive EU legislation, the Digital Services Act gives individuals control through transparency requirements that entities must abide by, requiring new judicial or alternative dispute mechanisms to be implemented to allow individuals to challenge content moderation decisions and/or seek legal redress for alleged damages caused by an online platform. The DSA will also require a certain amount of transparency into entities’ algorithms that are used to recommend content or products (i.e., target) to individuals.

Background and EU Legislative Process

The DSA is a small piece of a larger package of laws and regulations that have slowly made their way through the EU legislative process. One piece of that package, the Digital Markets Act (“DMA”), which was already agreed to in March 2022, focuses on regulating anti-competitive and monopolistic behavior in the technology and online platform (digital and mobile) industries. The DMA is at the forefront of a global trend that is seeing regulators look to antitrust legislation as a way to regulate technology companies and online services.

With the agreement in principle in place, the Digital Services Act will now move through the EU’s co-legislative process: the individual EU Member States (France, Germany, etc.) must approve the DSA through the EU Council, which is made up of representatives from each Member State, while, in tandem, the European Parliament will take up the DSA for approval. Once the European Parliament and the full EU Council approve the DSA, it will be finalized and enter into force. As proposed, compliance with the DSA will be required by the later of 15 months after it enters into force or January 1, 2024.

However, very large online platforms will have a shortened timeline and must comply with the DSA within 4 months of its effective date.

Details of what will end up in the final version of the Digital Services Act remain scarce, and the final scope and impact of the new law will not be known until the final text is released. However, the EU has provided some insights and general principles that will guide the final text, and these can help entities that must prepare for the DSA.

Spectrum of Applicability

The DSA applies to “digital services,” a broad category that covers online infrastructure, such as search engines; online platforms, such as social media; online marketplaces; and even smaller websites. Additionally, the DSA will apply regardless of where an entity is established: if an online service operates in the EU, it must comply with the DSA. However, as mentioned above, the applicability of specific requirements will depend on the size and impact of the service.

There are four categories of online services according to the DSA: (1) intermediary services; (2) hosting services; (3) online platforms; and (4) very large platforms. Each subsequent category of online service is also considered a sub-category of the preceding type of online service, meaning the requirements placed on intermediary services are also placed on hosting services, etc. Intermediary services include those entities that provide network infrastructure, with some examples being internet service providers and domain registrars. Hosting services include cloud and website hosting services. 

Online Platforms include online marketplaces, app stores, economy platforms, and social media sites. And finally, very large online platforms include those online platforms that serve and reach over 10 percent – or about 45 million – of consumers in the EU.
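Because the categories nest, classification works like a ladder, with each tier inheriting the obligations of the tiers below it. The schematic Python sketch below simplifies the very-large-platform threshold to the 45-million-user figure.

```python
def dsa_categories(is_intermediary: bool, hosts_content: bool,
                   is_platform: bool, eu_monthly_users: int) -> list:
    """Return every DSA category an online service falls into.

    Tiers are cumulative: a very large online platform must satisfy
    all four sets of requirements.
    """
    categories = []
    if is_intermediary:
        categories.append("intermediary service")
    if hosts_content:
        categories.append("hosting service")
    if is_platform:
        categories.append("online platform")
    if is_platform and eu_monthly_users >= 45_000_000:
        categories.append("very large online platform")
    return categories

# A social network with 50M EU users carries all four rule sets.
print(dsa_categories(True, True, True, 50_000_000))
```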

DSA Requirements

The requirements of the DSA are cumulative and will depend on the size and impact of a given company. Even in the absence of many specifics and details, entities can nonetheless begin preparing for the types of policies and procedures that will be required. The requirements that the EU has outlined for the proposed DSA are broken down below by the specific categories of online services.

Intermediary Services – If an entity is considered an intermediary service, it must implement policies and procedures related to the following areas: (1) transparency reporting; (2) terms of service/terms and conditions that account for defined EU fundamental rights; (3) cooperation with national authorities; and (4) accurate points of contact and contact information, as well as appointed legal representatives, where necessary.

Hosting Services – If an entity is considered a hosting service, the entity must comply with the above intermediary service requirements. Additionally, a hosting service must: (1) report criminal offenses (likely related to the sale of illegal products and services, or the posting of illegal and harmful content); and (2) increase transparency to its consumer base through notice and choice mechanisms that fairly inform the individual consumer.

Online Platforms – If an entity is considered an online platform, it will need to comply with the above intermediary service and hosting service requirements.

Additionally, an online platform must implement policies and procedures that address: (1) complaint and redress mechanisms (including both judicial and alternative dispute remedies); (2) the use of “trusted flaggers”; (3) abusive notices and counter-notices (e.g., dark patterns); (4) the verification of third-party suppliers on online marketplaces (including through the use of random or spot checks); (5) bans on targeted advertising to children and on targeted advertising based on special characteristics (e.g., race, ethnicity, political affiliation); and (6) transparent information on targeting and recommendation systems.

Specifically, the obligations related to “trusted flaggers” will require online platforms to allow trusted flaggers to submit notices of illegal content or products and services on the given online platform. To be considered a “trusted flagger,” an individual must meet certain certification requirements: certification is only granted to those that (1) have expertise in detecting, identifying, and notifying supervisory authorities of illegal content; (2) are able to exercise their responsibilities independently of the specific online platform; and (3) can submit notices to the proper authorities in a timely, diligent, and objective manner.

Very Large Platforms – Very large platforms are the most regulated sub-category of online services under the DSA. These entities must comply with the requirements set forth for intermediary services, hosting services, and online platforms. Additionally, they must: (1) implement risk management and crisis response policies and procedures; (2) conduct internal audits, and have external audits conducted, of the services; (3) implement opt-out mechanisms so individuals can opt out of targeted advertising or user profiling; (4) share data with public authorities and independent researchers; (5) implement internal and external-facing codes of conduct; and (6) cooperate with authorities in response to a crisis.

The transparency and audit requirements set forth for very large platforms will require annual risk assessments to identify any significant risks to the platform’s systems and services. The risk assessment must include reviews of the following: (1) illegal content, products, and/or services; (2) negative effects on defined EU fundamental rights, especially with respect to privacy, freedom of expression and information, anti-discrimination, and the rights of children; and (3) manipulation of the services and systems that could result in negative effects on public health, children, civic discourse, the electoral system, and national security.

In addition to the risk assessment, independent external auditors will need to conduct assessments of the services and systems at least once a year. Such auditors will need to produce a written report and very large platforms will need to implement and maintain policies and procedures to remedy any identified issue.

Penalty For Noncompliance

Individual EU Member States will have the freedom to implement the specific rules and procedures for how penalties will be issued under the DSA. In the most recent draft, the DSA called for penalties that are “effective, proportionate and dissuasive,” meaning that penalties could be imposed where no direct damages occurred, or could exceed any direct damages. As proposed, any entity that violates the DSA can face a penalty of up to 6 percent of its annual revenue.

Looking Forward

Once implemented and effective, the DSA will set the standard for requirements related to fairness, transparency, and responsibility that online services must comply with. Entities that fall within any of the DSA’s four categories of in-scope digital services will need to begin investing resources into policies and procedures to address the various topics addressed in the DSA.

The DSA sets out a compliance deadline of January 2024, or 15 months after the DSA’s final effective date, whichever is later. This gives a number of entities time to jump-start their compliance efforts. However, the base compliance deadline is a bit deceptive, as a large number of entities will likely fall within the very large platform category. Such entities will only have 4 months after the DSA’s effective date to come into compliance and cannot afford to wait for final approval of the DSA to jump-start their compliance programs.

Lucas Schaetzel is an associate in Benesch Friedlander Coplan & Aronoff LLP’s Intellectual Property/3iP Practice Group.