Review of “Generative AI in Legal 2024”

The Future of Law · Feb 02, 2024

Report by: Ryan O’Leary, Esq., IDC
Commissioned by: Relativity
Date: November 2024

Access the Report Here

About the Reviewer: Marlon Hylton

Marlon Hylton is the founder of Innov-8 Data Counsel & Innov-8 Legal Inc., where he pioneers the integration of legal expertise and business strategy with advanced AI and technology. A former partner at a leading Canadian law firm, Hylton is widely recognized—earning accolades from Best Lawyers and Who’s Who Legal—for his groundbreaking work in e-discovery, information governance, and AI-driven legal transformation. He is currently a student in the Advanced Certificate in Management, Innovation, and Technology at the Massachusetts Institute of Technology (MIT) Sloan School of Management, where he explores the intersection of emerging technology and legal innovation.

Conclusion

The Generative AI in Legal 2024 report provides valuable insights into AI adoption in legal practice, highlighting key trends and challenges. However, its lack of methodological transparency, regulatory depth, and real-world case studies limits its practical applicability. Legal professionals should use it as a starting point but supplement it with deeper research and real-world analysis to develop effective AI strategies.

I. Executive Summary

The Generative AI in Legal 2024 report provides a data-driven analysis of the impact of generative AI (GenAI) on legal practice, including adoption trends, governance, and associated challenges. The study is based on a survey of 300 legal professionals across North America, Europe, and Asia-Pacific (APAC), encompassing law firms, corporations, and government agencies.

The key findings of the report include:

  • AI adoption in legal work has increased by 50%, with an additional 43% growth projected over the next two years.
  • Paralegals and legal operations professionals are adopting AI at higher rates than lawyers.
  • Document review is the most trusted AI application (89% of respondents are comfortable using AI for it).
  • AI governance, security, and bias remain major concerns, with 69% of legal professionals worried about keeping up with AI’s rapid evolution.

While the study presents valuable insights, it could be strengthened in key areas:

  • Transparency in methodology—The report does not sufficiently disclose how respondents were selected, potentially leading to sample biases.
  • Regulatory discussion—While the study acknowledges compliance concerns, it does not explore AI-specific legal frameworks in sufficient depth.
  • Real-world case studies—The study lacks qualitative examples demonstrating practical AI adoption challenges and successes in legal practice.

II. Strengths of the Study

1. Comprehensive Analysis of AI Adoption Trends

The report offers a well-structured overview of how legal professionals are using AI, particularly in:

  • Document review (most widely accepted use case).
  • Contract analysis and legal research (key areas of early adoption).
  • Compliance and risk management (growing adoption but with continued caution).

The study also provides regional insights, highlighting that:

  • North America and Europe prioritize AI for automating low-level tasks.
  • APAC respondents express higher concern over AI’s impact on data privacy and governance.
  • Large firms (1,000+ employees) are leading GenAI adoption, with a nearly 50% increase in usage.

These findings align with broader industry trends where AI adoption first focuses on efficiency gains before progressing to higher-order legal tasks.

2. Balanced Acknowledgment of AI Risks

Unlike some industry reports that overhype AI, this study realistically assesses its limitations, addressing concerns such as:

  • AI hallucinations—False or misleading outputs.
  • Data bias and explainability issues—Cited by 36% of respondents as a key concern.
  • Security risks—Including the loss of sensitive legal data (41% concern rate).
  • Regulatory compliance uncertainty—Legal teams remain uncertain about AI governance best practices.

This balanced approach enhances the credibility of the study and ensures that it resonates with legal professionals navigating AI’s risks.

3. Robust Discussion of AI Governance and Organizational Preparedness

The report highlights AI governance as a major challenge:

  • 69% of legal professionals worry about keeping pace with AI advancements.
  • Legal and IT departments share governance responsibilities, but no clear leader has emerged.
  • 73% of organizations are investing in AI training, though technical expertise remains a barrier.

These findings reflect broader industry struggles in balancing AI innovation with legal and ethical safeguards.

III. Weaknesses and Limitations

1. Insufficient Transparency in Survey Methodology

While the study discloses the sample size (300 respondents) and the survey period (July to October 2024), it lacks critical methodological details:

  • How were participants selected? If AI-enthusiastic firms were overrepresented, this could skew results.
  • Was there segmentation by firm size, practice area, or level of AI integration? The absence of such breakdowns limits the generalizability of the findings.
  • No control group for comparison—Insights on firms not adopting AI would provide a stronger contrast.

Without addressing these gaps, readers should be cautious when extrapolating the report’s conclusions across the entire legal industry.

2. Lack of Real-World Case Studies

The study relies heavily on self-reported survey data but does not include qualitative case studies to validate its findings. This limits its practical applicability because:

  • There is no demonstration of actual efficiency gains or cost savings from AI adoption.
  • It does not document challenges faced by firms implementing AI into their workflows.
  • There are no examples of AI’s impact on legal decision-making.

Incorporating case studies from law firms, corporate legal teams, or government agencies would significantly enhance the study’s credibility and make it more actionable for legal professionals.

3. Insufficient Depth in Regulatory and Compliance Risks

The report correctly identifies AI governance as a major concern but does not fully explore existing and emerging legal frameworks, such as:

  • Data privacy laws (GDPR, Canada’s PIPEDA, U.S. state privacy statutes, APAC regulations).
  • AI-specific regulations (the EU AI Act, Canada’s proposed Artificial Intelligence and Data Act in Bill C-27, U.S. Executive Orders and state AI laws, emerging APAC frameworks).
  • Ethical risks in AI’s impact on solicitor-client/attorney-client privilege and case law interpretation.

Given the increasing regulatory scrutiny on AI, a deeper discussion on legal compliance strategies would improve the report’s utility.

4. Over-Reliance on Self-Reported Data

Survey-based insights are valuable, but self-reported AI adoption rates can be unreliable because:

  • Respondents may overestimate AI adoption due to industry hype.
  • Actual AI implementation may be limited to narrow use cases, rather than substantive legal tasks.
  • Economic incentives may distort responses—vendors promoting AI may have biased perspectives.

The study would be more robust if supplemented with independent AI adoption statistics from legal software providers.

Final Verdict

The report provides a valuable high-level overview of AI’s role in legal practice, but lacks depth in regulatory, ethical, and real-world implementation challenges. Legal professionals should supplement this study with additional research on governance, compliance, and AI integration strategies.
