By David Elkins, Lauryn Durham and Nicole Brenner
Emerging technologies are transforming the games we know and love. From player recruitment to athlete recovery, AI's integration into sports is opening doors for optimized performance and real-time risk analysis.
As more teams and organizations adopt AI, counsel should be aware of the ways their clients can and do leverage it. A critical eye should focus on the relevant legal issues, including state and federal regulatory developments, data privacy concerns, and the ways AI tools could affect applicable intellectual property rights.
Player Recruitment and Performance
Recruitment Data
Recruiting the best talent is important in any organization — particularly so for the future success of professional sports teams.
Statistics have long been used to evaluate player performance and support contract negotiations. Machine learning algorithms are now being used to aggregate data, compare contracts and evaluate prospects.
A growing number of teams are using AI to analyze large data sets in place of more subjective player evaluations, reducing research time and travel costs and potentially making evaluations more reliable and consistent.
In professional soccer, some teams have used a data-driven platform called AiScout, which allows teams and prospective players to hold virtual trials during the recruitment process.
The app takes in individual statistical data, including biomechanics and technique, to analyze and evaluate potential performance in real time. AiScout is even an official partner of several teams, including Premier League team Chelsea Football Club and Major League Soccer's New York Red Bulls.
AI also offers powerful predictive capabilities, allowing teams to identify which players have higher probabilities of success in the next season. For example, some National Football League teams use Probility AI to assist in recruiting activities.
Probility AI uses a combination of exploratory and predictive data modeling to inform recruiters about the risks of adding players to the roster. Drawing on public and private organization data, Probility AI can predict, with a purported 96% accuracy, which players will miss playing time due to injury or other health risks, information that affects their recruitment profiles.
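Probility AI's data and methods are proprietary and not public. Still, the general shape of this kind of risk scoring can be sketched with a toy logistic regression model; every feature name and weight below is invented for illustration and does not reflect any vendor's actual system:

```python
import math

def injury_risk(games_played, prior_injuries, age,
                weights=(-3.0, -0.02, 0.9, 0.08)):
    """Toy logistic-regression score of a prospect's risk of missing
    playing time. Features and weights are hypothetical, chosen only
    to show how historical data can be turned into a probability."""
    b0, b1, b2, b3 = weights
    # Linear combination of features, squashed to a 0-1 probability.
    z = b0 + b1 * games_played + b2 * prior_injuries + b3 * age
    return 1 / (1 + math.exp(-z))

# A 24-year-old prospect with 16 games played and 2 prior injuries:
risk = injury_risk(games_played=16, prior_injuries=2, age=24)
```

A real system would be trained on labeled historical outcomes and validated before any accuracy claim could be made; this sketch only illustrates the input-to-probability pipeline that such tools automate.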
Performance Analytics
Sports teams regularly use predictive analytics through wearable devices to analyze and improve athlete performance. Major League Baseball, for example, uses the tracking system Hawk-Eye.
The system captures over 40 terabytes of data each season, in addition to data from Statcast, which has run on Google Cloud since 2020. Using cameras and radar systems installed at all 30 MLB stadiums, the system gives teams insight into player movements, pitch velocity and launch angles. Pitch by pitch, teams use the outputs to make game-time decisions.
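To make metrics like pitch velocity and launch angle concrete, here is a minimal finite-difference sketch that derives both from two tracked 3D ball positions. This is an illustration only, not Hawk-Eye's actual method, which fits full trajectories from many camera and radar samples:

```python
import math

def speed_and_launch_angle(p0, p1, dt):
    """Estimate ball speed (mph) and launch angle (degrees) from two
    tracked 3D positions, in feet, sampled dt seconds apart.
    Illustrative sketch only; real tracking systems fit trajectories
    from many noisy samples rather than differencing two points."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    vz = (p1[2] - p0[2]) / dt  # vertical component
    speed_fps = math.sqrt(vx**2 + vy**2 + vz**2)
    speed_mph = speed_fps * 3600 / 5280  # feet per second -> mph
    horizontal = math.sqrt(vx**2 + vy**2)
    launch_deg = math.degrees(math.atan2(vz, horizontal))
    return speed_mph, launch_deg

# Ball at (0, 0, 3) ft leaves the bat; 10 ms later it is at (1.2, 0.2, 3.4):
mph, angle = speed_and_launch_angle((0, 0, 3), (1.2, 0.2, 3.4), 0.010)
```

Even this simplified version shows why the data volumes are large: producing stable estimates requires many position samples per pitch, for every pitch of every game.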
The NFL partnered with Amazon Web Services to create the Digital Athlete, an AI tool that enhances player safety. The technology aims to help keep players healthier and safer by predicting and preventing future injuries.
It uses algorithms that weigh equipment, speed, weather and hours of video to improve understanding of how injuries occur. Teams can also employ the technology to create individualized training and recovery regimens that improve player performance.
The National Basketball Association also uses similar AI technologies to proactively assess players' weaknesses. In 2023, the NBA partnered with Second Spectrum to improve team performance through 3D location monitoring throughout games. The tracking system applies machine learning techniques to help players improve their game.
Legal Rules of Play
While the above shows that AI can provide benefits, its use can trigger legal implications, particularly when personal information is collected and disclosed to third parties.
The many innovations have prompted members of federal and state legislatures to learn more about AI and outline legislation to regulate it.
On Oct. 30, 2023, President Joe Biden issued the landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to set the stage for a robust, reliable framework for managing AI risks in the U.S. The order highlights eight principles.
1. Safety and security: Promote the development and implementation of repeatable processes and mechanisms to understand and mitigate risks related to AI adoption, including with respect to biosecurity, cybersecurity, national security and critical infrastructure.
2. Innovation and competition: Compel actions to attract AI talent to the U.S., understand novel IP questions, protect inventors and creators, and promote AI innovation, including at startups and small businesses.
3. Worker support: Research and develop potential mitigations against AI adoption disruption to the workforce.
4. Consideration of AI bias and civil rights: Address equity and civil rights considerations in the use of AI in the criminal justice system and in the administration of federal government programs and benefits.
5. Consumer protection: Enforce existing, technology-agnostic authorities in an effort to minimize harms to consumers, and to identify needed authorities related to AI.
6. Privacy: Evaluate and mitigate privacy risks associated with the collection, use and retention of user data.
7. Federal use of AI: The U.S. Office of Management and Budget is to establish an interagency council to coordinate AI use by federal agencies and develop guidance on AI governance and risk management activities for agencies.
8. International leadership: The U.S. should be a leader in AI development and adoption by engaging with international partners.
Cities are also enacting local legislation aiming to bring more transparency to the use of AI. For example, New York City Local Law 144, effective in July 2023, prohibits employers from using an automated employment decision tool to make employment decisions unless the tool undergoes an annual bias audit. The law represents an expansive effort to broaden the scope of AI regulation in the human resources context.
Counsel should further understand the significant data protection, privacy and IP issues related to the use of AI technologies.
Data Protection and Privacy
AI poses important privacy and security risks. The challenges in assessing such risks are enhanced by the constantly changing nature of data privacy frameworks around the globe and throughout the U.S.
Most AI-related laws under consideration or already enacted in the U.S. are part of larger, comprehensive privacy laws. Any organization using AI will need to put policies in place to comply with the applicable federal, state and international privacy laws and regulations governing data protection.
Put more simply, organizations utilizing AI need to take full stock of — and clearly disclose, obtaining informed consent if needed — the nature of all personal data collected and its uses.
Intellectual Property
In addition to data protection, counsel should also be aware of the IP issues at play in the context of AI. Understanding the relevant IP is crucial when using technologies owned, created, or protected by others, and when a party wants to protect its own AI tools.
AI systems may help generate new works of authorship, such as software. But the U.S. Copyright Office only offers protection to original works of authorship created by humans. Some works containing AI-generated material, however, may still be copyrightable in certain circumstances.
For example, if a human arranged AI-generated material in a sufficiently creative way such that "the resulting work as a whole constitutes an original work of authorship," copyright protection may be available for the human-authored aspects of the work, according to Title 17 of the U.S. Code, Section 101.
Unresolved issues remain regarding copyright in the context of generative AI, a subset of AI that generates media, such as text and images, in response to a user-supplied prompt, based on models developed by analyzing vast amounts of data.
Courts and the Copyright Office are confronting the question of whether AI-generated outputs may be copyrightable or potentially infringe on other works. Many generative AI tools are based on large language models that ingest text, images and works from the internet at large.
A raft of copyright infringement lawsuits has challenged the incorporation of copyrighted material into, and its use by, such language models. AI tool makers tend to rely on the fair use doctrine as a defense to infringement claims.
AI inventions may be patentable so long as the inventor driving the technology is a human being. In the 2022 case Thaler v. Vidal, the U.S. Court of Appeals for the Federal Circuit held that only natural persons can be named inventors on U.S. patents.
Thaler claimed that his AI machine generated patentable inventions and thus was the inventor, but the court did not agree because in "the Patent Act, 'individuals' — and, thus, 'inventors' — are unambiguously natural persons."
Organizations can and do rely on trade secret protection to safeguard AI innovations. While trade secret protection is weaker than patent protection, it is also more flexible, does not require registration and does not expire in the absence of disclosure.
It arises when a company takes reasonable measures to protect its proprietary information. Measures may include physical barriers, technological measures and various agreements.
The Future of AI in Sports
AI capabilities in the sports industry are only expanding. This major technological shift creates new opportunities for improvement across all sports, in the ways described above and beyond.
By leveraging AI technologies, while complying with applicable laws and regulations, teams can take advantage of an exciting new digital era to achieve more positive outcomes — inside and outside of the game.
Stepping into the future, it is worth remembering that human factors remain at the core of team sports and are critical to the industry. Data-driven algorithms must be balanced with subjective factors, such as human spontaneity and unpredictability, to keep sports and games exciting.
As with any rapidly evolving technology, AI's regulatory and legal frameworks are moving at an extraordinary pace. Protecting players while staying compliant with legal and regulatory requirements is an important balance to consider as this exciting area continues to develop.
David Elkins is a partner and leads the global intellectual property and technology practice group at Squire Patton Boggs LLP.
Lauryn Durham is an associate at the firm.
Nicole Brenner is a law clerk at the firm.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.