As AI becomes more pervasive and lawyers seek guidance on navigating our rapidly evolving AI legal and regulatory environment, we are seeing a big increase in continuing legal education (CLE) events regarding AI and the law.

These events are often produced by law firms, bar associations, law schools, legal tech providers, general AI providers and legal industry organizations. They can be delivered in person, online via webinar or through a hybrid approach. Some of these events charge a fee to attend while others are free of charge, and some qualify for formal CLE credit for lawyers while others don’t.

I think it’s good that we are seeing an increase in AI CLEs, as lawyers need to become more educated about AI and have an ethical obligation to understand the benefits and risks of using technology to help serve their clients. I try to attend as many of these events as I can so that I continue to “skill up” on AI and gain different perspectives in this area.

Recently I had the opportunity to participate in two CLEs regarding AI and the law – one as a “roundtable” participant in Chicago, coordinated by ACC Chicago and hosted by the law firm Mayer Brown, and one as a speaker on an AI panel hosted by Baker McKenzie in Washington, DC. I really enjoyed participating in these events and I learned a lot.

As we see this large influx of AI CLEs, it can be hard to determine which ones make the most sense to attend. Here are my thoughts on what makes a strong AI CLE:

Offer Practical Takeaways: Like most CLEs, the best AI CLEs are the ones that provide the audience with real-world best practices that lawyers can actually use to serve their clients. AI CLEs that focus on practical steps for lawyers to navigate growing AI considerations, rather than on legal theory about AI, offer the highest value. AI CLEs would also be well served by providing a “leave-behind” or other materials that capture those practical takeaways.

Basics of AI: I think it’s smart to provide a short overview of the basics of AI at the very beginning of any AI CLE, especially since many lawyers remain unfamiliar with the fundamentals of AI technology. Those AI basics should be delivered in an easy-to-understand fashion, without a heavy dose of technical jargon, so that they can be readily consumed by a legal-centric audience. Also, consider whether it makes sense to provide a very short AI demo – and be sure that your demo actually works!

Have Great Speakers: While it may be very obvious, having excellent speakers will help make your AI CLE memorable – and lousy speakers will make it forgettable. I continue to see some lousy speakers at AI CLEs (and at many non-AI CLEs) amid the growth of so many self-proclaimed “AI experts.” Please take the time to conduct the appropriate due diligence to secure top-notch speakers – especially if you are charging a fee to attend.

Balanced Speakers: Your speaker slate for an AI CLE should also be inclusive so that it represents a wide range of perspectives. This focus on inclusivity is consistent with leading Responsible AI principles such as fairness and inclusiveness. Also, please be sure to avoid any “manels.”

In-Person vs. Virtual: There are various pros and cons associated with in-person versus virtual CLEs. While in-person events can be pricey to produce and require a fair amount of logistical coordination, they also provide better networking opportunities for speakers and the audience. I also believe that an appropriate venue and food/refreshments are critical to making an in-person CLE experience successful. While webinars don’t offer the same networking opportunities as in-person CLEs, they typically cost less to produce and can be scaled to reach a much wider remote audience.

Roundtable Format: As the intersection of AI and the law remains in its early stages, I’m a big fan of informal roundtable CLE sessions about AI, with discussion leaders/facilitators for various AI topics and roundtable participants actively contributing to the conversations. I find that this roundtable format enables an immediate and rich sharing of ideas and best practices on AI – especially when there’s an understanding that the Chatham House Rule is in effect.

Panel Format: AI CLEs structured as one or more topic-focused panels are also highly popular. In my experience, there should be no more than 5 people per panel in order to give panelists equitable opportunities to contribute to the discussion, and a highly skilled moderator is needed to keep the discussion moving forward.

Presentation Format: AI CLEs can also be delivered in a traditional presentation format, whereby a presenter delivers his/her presentation to an audience via PowerPoint slides or something similar. If you go down this route for an AI CLE, please consider keeping presentations relatively short – no more than 15 minutes in length, perhaps in a TED-style talk format – as your audience’s attention span will be very limited and few presenters can capture an audience’s attention for an extended period of time.

AI Legal Ethics: Obtaining CLE credits to satisfy state bar requirements is an incentive for lawyers to attend CLEs. The AI legal ethics area is actively evolving as we speak. In my view, educating lawyers on how to use AI in a responsible and legally ethical fashion is an important and increasingly in-demand topic.

Enable Audience Participation: The best AI CLEs promote very active audience participation by providing opportunities for the audience to pose questions to speakers – whether live in real time or via a technology option built into a virtual webinar. Carefully consider how your AI CLE enables meaningful audience participation for the learning benefit of everyone.

Last week (on Valentine’s Day), Air Canada, Canada’s largest airline, was ordered to issue a refund to a customer who was misled by its chatbot. This case has received a fair amount of attention across the internet and social media.

The facts of the case are pretty straightforward. An Air Canada customer sought a bereavement fare for travel after the passing of his grandmother. He relied on information provided by Air Canada’s chatbot indicating that he could apply for a refund retroactively after he purchased his ticket. When he applied for the refund, Air Canada informed him that bereavement rates were not applicable retroactively to completed travel. The customer provided Air Canada with a screenshot of the bot’s advice and then sued Air Canada in small claims court for the fare difference.

While Air Canada argued that the correct information about bereavement fares could be found on its website, and maintained that the chatbot was a separate legal entity such that the airline was not responsible for its actions, the court ruled in favor of the customer.

This “Canada Chatbot” case is an interesting one. First off, if I were providing legal advice to Air Canada, I would have advised them to provide the customer with the appropriate bereavement fare refund, along with some suitable credit for future air travel, to help avoid any potential legal claim and the associated potential negative publicity.

Here are my thoughts on the AI-specific aspects of this case:

New AI Case Law: While this is only a small claims court case, it shows that as AI becomes more prevalent across all industries, we will also see an increase in AI jurisprudence. We need to remember that in addition to the growing body of applicable AI rules and regulations, relevant legal cases will also significantly impact the development of AI law. Hopefully, lawyers and judges will increasingly understand AI so they can help shape meaningful AI law.

The Rise of Chatbots: As this case demonstrates, Air Canada, like many companies, uses chatbots as a digital concierge to help serve its customers and to enable smarter utilization of its human resources. As a younger generation of potential customers who grew up texting and using smartphone apps comes of age, and as better AI-powered chatbot tools become available in the marketplace, we will see even more organizations use chatbots to help address questions from their customer base. In the legal industry, there are growing opportunities for legal departments to use bots to serve their business clients, for law firms to use bots to convey relevant information to their clients and for our court systems to leverage bots to improve access to justice for citizens.

We Are Our Bots: The bots that organizations use to interact with the public are really extensions of those organizations. They serve as an organization’s agents and representatives, and it will be difficult for organizations to disclaim responsibility when their bots supply inaccurate information that customers rely upon – especially when those organizations are highly sophisticated and have “deep pockets.” Organizations that choose to use chatbots also need to carefully vet and select the providers who supply the underlying AI technology.

Proactive Chatbot Oversight: When organizations use bots to serve their customers, they need to make sure the data they “feed” to the bot is relevant, accurate and constantly updated; they cannot act in a laissez-faire manner. All organizations, including legal organizations, need to properly oversee and maintain their respective chatbot solutions. For legal organizations, this active oversight function is similar to what lawyers must do from a legal ethics perspective in overseeing and managing paralegals, legal professionals and technology tools like cloud computing.

Chatbot Transparency: If legal organizations are using chatbots to interact with the public or their clients, it’s also a good idea for those organizations to make clear that users are connecting with a chatbot and not interacting with an actual lawyer.

Deploying chatbots as a strategy to serve customers can offer a variety of benefits – just make sure that you are smart and responsible when doing so.

It was great to be “back in law school” this past Friday to attend the “AI and Law: Navigating the Legal Landscape of Artificial Intelligence” Symposium at Northwestern Pritzker School of Law in Chicago, produced by the school’s Journal of Technology and Intellectual Property.

This terrific event was spearheaded by Northwestern Law Professor Dan Linna – who is an incredible legal educator. Professor Linna is also one of the foremost legal experts regarding AI and the law. He’s highly respected, his classes prepare his law students for the practical realities of the “real world,” and I have had the good fortune to learn from him.

Professor Linna and his team put together an outstanding agenda for this event, and the conference attendees were treated to valuable insights from various leaders across legal academia.

As an in-house lawyer, I really enjoy attending these law school events on important topics like AI, as they provide me with an opportunity to escape my own personal and professional “echo chamber” and to learn from legal leaders who are at the cutting edge of important issues in the AI area.

After introductions by Northwestern Law Dean Hari Osofsky and Professor Linna, University of Colorado Law Professor Harry Surden kicked off the event with a keynote entitled “Advances in Artificial Intelligence and Law: ChatGPT, Large Language Models (LLMs), and Legal Practice.”

Professor Surden’s talk provided an overview of GenAI, and he shared some thoughtful observations about GenAI and GPT-4 in the slides below. For example, he talked about how these tools are reasonably good – but you need to proceed with caution. He said that GPT-4 is akin to “a very good 3rd year law student,” and that interesting comparison made good sense to me. Professor Surden also warned that current GenAI tools have various limitations and struggle with certain scenarios, including “complex legal reasoning,” “non-standard scenarios that are out of distribution,” “hard cases of subjective judgment,” and “complete accuracy and reliability.”

The next speakers were Professor Sabine Brunswicker and Professor J.J. Prescott, who spoke about using AI tools to deliver legal services. Professor Brunswicker talked about using AI chatbots, the role of empathy with chatbots and how empathetic chatbots may be more helpful to users (I did not know that chatbots could actually be empathetic – but I guess they can be programmed accordingly). Here’s an interesting slide from Professor Brunswicker’s talk:

Professor Prescott explored the ability of AI tools to improve access to justice for citizens – especially given the significant expense of lawyers nowadays. There is a perspective that some of these tools may be viewed as a form of “second-class justice” for potential litigants; however, these tools are better than having no advice from lawyers whatsoever. There was also a discussion about the many opportunities for tech/AI to explain things to others, to make litigants feel that they were actually heard, and to lower the effort required for people to find things compared to Frequently Asked Questions (FAQ)-style documents. Also, as we have seen in our current tech world, some folks would rather use apps than engage with humans – and likewise, not everyone may want to engage with a human lawyer. An Illinois state judge in the audience also raised an interesting point about the ability of AI tools to free up time for judges and courts to perform more important tasks for citizens.

The next set of speakers addressed AI regulation and privacy issues. Professor Bryan Choi shared his thoughts, as depicted in the slide below, that AI regulations are often premised on standards of care and that it may make sense to have a set of “vertical” standards based on key areas like data, learning and testing.

Professor April Dawson shared her thoughts on the topic of “Constitutional AI and Algorithmic Adjudication.” Based on a poll of attendees, the audience trusted AI adjudication much less than traditional human decision-making in legal contexts. Professor Dawson wrapped up her talk with the terrific slide below, where she concluded with these 3 key observations: (1) change/disruption is here; (2) lawyers need to understand this transformative AI technology; and (3) legal education needs to better educate law students. In fact, I think this slide nicely summarized the major takeaways from the conference.

While the next speaker was Professor Charlotte Tschider, I missed her talk as I needed to attend a work conference call.

After a lunch break, there was another keynote presentation, delivered by Professor Pamela Samuelson on the important topic of the intersection of copyright law and AI and appropriately entitled “Generative AI Meets Copyright.” Professor Samuelson delivered a very insightful presentation.

The final speakers provided their unique perspectives regarding AI and intellectual property. As an in-house lawyer, I appreciated Professor Nicole Morris’s practical suggestion below for avoiding a situation similar to what happened to Samsung last year, when some of its employees accidentally leaked company trade secret information to ChatGPT.

I’m really glad I invested the time to attend this excellent event as I learned a lot and I was able to network with so many smart lawyers, legal professionals and law students. The law students who have the ability to learn from the outstanding law professors who spoke at this event are super lucky!

Last year the internet went into a frenzy after pictures like the one above of Pope Francis wearing a puffer jacket were circulated. These images were created by generative AI, and of course they demonstrate the powerful nature of AI technology.

The Pope has also been proactive in stressing the importance of ethical AI. It’s very significant to see a person of his great influence and stature highlight the importance of responsible AI.

Recent news reports indicate that the Pope and the Vatican rely on an AI expert named Friar Paolo Benanti to help shape their thinking on AI issues and to serve as their resident AI expert. The Associated Press recently reported on Benanti’s role in the AI area, as did The New York Times. As a result, Benanti is gaining increasing recognition across the globe for his visible leadership in advising the Pope and the Vatican on AI.

The Pope and the Vatican are very smart to enlist the help of an AI specialist to guide them on the myriad AI-related issues that we will see in a growing AI world. In fact, here are some lessons that the legal industry can learn from the fact that the Pope has an “AI Lead”:

Understanding the Benefits & Risks of AI: As part of a lawyer’s ethical duties when using technology to serve clients, lawyers need to understand the potential pros and cons associated with using that technology – including AI. Law firms and legal departments would benefit from having AI-focused people on their teams – or as consultants – to educate them about AI, keep them up to speed on the ever-changing AI technology and regulatory landscape, and position them to use AI tools in a responsible manner. Obtaining consistent help from one or more AI experts would be both a smart business and legal decision for all legal organizations.

The Rise of AI “Chiefs”: As AI technology becomes more universally embraced, we will also see legal organizations focus more on AI governance and create AI-centric roles such as Chief AI Officer, Chief Data Scientist, Chief Responsible AI Officer or similar senior AI leads. These senior AI roles will also have an important “AI Ambassador” component to them.

Multi-Disciplinary Skills: To be successful in the role of an AI Lead, a person needs very broad skill sets in key areas like technology, business, data, legal, compliance and privacy, as well as effective ways-of-working skills like clear communication, effective collaboration, change management and empathy.

If the Pope and the Vatican have made the wise decision to invest in an AI leader so they can better serve their large community, legal organizations should also explore whether it makes sense to identify and secure the appropriate AI talent for their respective organizations so they can better serve their clients in a responsible and ethical fashion.

A must-read report entitled “Generative Artificial Intelligence and the Workforce” was released last week by The Burning Glass Institute and SHRM. This report was also featured in an article in The New York Times.

This report is interesting for the legal industry since it shares the following observations regarding GenAI’s potential impact on the profession:

  • Regulatory Compliance: “Examples of how AI will place certain occupations at high risk include: … Regulatory compliance, a task overseen by auditors, compliance officers and lawyers, demands thoroughness and accuracy. GenAI can facilitate quicker compliance checks with fewer errors.” (Page 5)
  • Repercussions by Industry: “The industries most likely to be affected include financial services, law and marketing research. For example, legal advisors face potential automation in creating standardized documents…” (Page 6)
  • Legal Occupation Affected by GenAI: Legal was identified as a key occupation most affected by GenAI compared to previous automation waves. In fact, law offices received the second-highest “GenAI Exposure Score” among occupations (3.906), right behind mortgage and nonmortgage loan brokers. (Pages 13 and 16)

Of course, only time will tell regarding the true impact of AI upon the legal profession.

While I don’t believe that AI will be replacing lawyers anytime soon, there is no doubt that sophisticated AI tools will be able to perform and automate certain tasks – especially routine and repetitive ones – that have been traditionally performed by lawyers, paralegals and other legal professionals.

A key takeaway from this report is that the legal profession needs to be open to learning more about AI and embracing AI tools to better serve clients, so that lawyers can practice at the top of their license.

I’m also adding this interesting graphic below from the report which provides a summary of workforce skills that will increase/decrease in importance with the rise of GenAI tools.

Hopefully our law schools will be teaching some of the skills identified above that will be increasingly important for lawyer success as AI technology advances and lawyers use more AI solutions to deliver legal services to their clients. Key skills like “AI Literacy,” “Emotional Intelligence,” “Continuous Learning,” “Critical Thinking,” “Digital Security and Privacy,” and “Creativity” will be even more critical for lawyers to invest in and build upon moving forward in an AI-powered world.

The annual Data Privacy Day took place earlier this week, on Sunday, January 28th. I originally thought that the data privacy people had worked with the people at Hallmark to create a Data Privacy Day. In any event, I didn’t receive a card in the mail to commemorate it!

Of course, as we move to an AI-powered world that is heavily reliant on massive amounts of data to help train AI algorithms that are the backbone for AI solutions, data protection has become even more important.

With the proliferation of AI solutions available in the marketplace, you have to take the time to conduct the necessary due diligence on an AI provider to make sure it can earn your trust – and especially to clearly understand how your data will be protected by the provider.

So, when you are working with AI providers, it is more important than ever to pay attention to the contract terms that apply to their use of your data, including whether it is used to train models, whether it can be commingled with data from other customers, and the level of privacy and security that will apply to its storage.

For example, the graphic above highlights some of the key data privacy features of the Microsoft Copilot for Microsoft 365 AI solution, which was made generally available to the marketplace this past November – and which I use every day to save time and be more productive. Additional data privacy and data security information about Microsoft Copilot is contained in this article.

It’s also important to keep up to speed on major new AI legislation that will have data privacy implications, including the upcoming EU AI Act. And here in the US, while it may be a stretch to see a comprehensive Federal AI law anytime in the near future, individual states may step in with new and supplemental data privacy laws as AI technology becomes increasingly prevalent.

Since AI technology will continue to rapidly advance and the huge amounts of data that fuel this technology become even more important, every day is Data Privacy Day!

While I am not a “Swiftie,” I’m a very big fan of the National Football League (NFL). Football fans like myself have recently been seeing a lot of the highly popular singer-songwriter Taylor Swift on national TV. She’s been dating Travis Kelce of the Kansas City Chiefs (and the Chiefs are an excellent football team), she has been attending Kelce’s games, and the TV networks televising Chiefs games routinely show her rooting for Kelce and the Chiefs from her private stadium seats.

Of course, as a very high-profile celebrity whose ongoing “Eras Tour” concert series has been wildly successful, Ms. Swift has historically been subject to lots of media scrutiny. Nowadays, with the rise of sophisticated AI tools, celebrities like Ms. Swift are unfortunately also very susceptible to “deep fakes,” where AI technology is used to create false videos, voices or images of a person, event or situation. Oftentimes, these deep fake videos, voices or images can seem very plausible and convincing, and they can lead to disinformation.

For example, earlier this month there was a deep fake advertisement featuring Ms. Swift endorsing cookware products. Much more troubling were very recent reports that fake, AI-generated sexually explicit images of Ms. Swift were circulating on the internet.

As we begin an important year of elections across the world (including, of course, our Presidential election in the United States), there are also big concerns that the rise of deep fakes will result in an increase in disinformation for voters. For instance, earlier this week, prior to the Republican primary in New Hampshire, there were reports of fake robocalls impersonating President Biden that sought to discourage voters.

Deep fakes should be a concern for everyone, as we all face the risk that our names, images, likenesses and reputations could be inappropriately manipulated by AI technology. It also seems that the more we put ourselves out on social media and post content, the greater the likelihood that our names, images, likenesses and reputations can be negatively impacted by AI tools.

So how do we try to contain the rising spread of disinformation through deep fakes, which has been fueled by the growing prevalence of AI technology? Here are some possibilities:

  • Watermarking Mechanisms: There’s a view that using digital watermarks to help identify genuine online content is a good way to push back against deep fakes. President Biden’s October 30, 2023 Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence and the Voluntary AI Commitments that the Biden Administration secured earlier in 2023 from various technology companies both recognize watermarking as a best practice. However, watermarks are not a “silver bullet” for stopping deep fakes. This recent article by the Electronic Frontier Foundation goes into detail about why we cannot so conveniently rely on watermarks, stating the following near the end: “Watermarking of AI generated content is an easy-sounding fix for the thorny problem of disinformation.” While watermarking technology may not stop deep fakes, hopefully we will see other new digital tools that can deter them as technology continues to advance.
  • US Federal Laws: While there has been lots of activity in educating Congress about AI and discussion about some bipartisan AI legislation, I’m not sure that we will see any comprehensive US Federal AI legislation in the very near future. However, there may be opportunities for Congress to pass some legislation which is focused on deep fakes as there seems to be growing momentum in this area and a bill was introduced in 2023 known as the AI Labeling Act.
  • US State Laws: During 2023, Minnesota, Michigan and Washington enacted laws to combat deep fakes. Many other states have gotten off to a “fast start” in 2024 by introducing new legislation in the deep fakes area.
  • Laws Outside the US: The eventual comprehensive EU AI Act will provide some regulation on deep fakes. When the AI Act does come into effect, I expect some countries to learn from it and potentially enact local laws which may be similar in nature. Disinformation has also become a significant issue in Brazil as voter misinformation was rampant during its 2022 Presidential election. As a result, Brazil has considered adopting Bill No. 2630 which is also known as the “fake news” law.
  • Active Enforcement of Laws: Of course, for any laws in this space to have impact, they need to be enforced by the applicable regulatory authorities. As an example, it was a step in the right direction to see the Federal Communications Commission issue a recent unanimous ruling banning the use of AI-generated voices in robocalls.

AI-powered deep fakes are a growing and highly serious concern: they fuel misinformation and can damage individuals. Thoughtful action by our respective governments, the technology industry and our society will be required to help contain the rise of deep fakes.

Three weeks ago, on New Year’s Eve, the 2023 Year End Report on the US Federal Judiciary was issued by US Supreme Court Chief Justice John G. Roberts, Jr.

I find it interesting from a marketing perspective that this annual report is released on the very last day of the year when most folks, even lawyers, are focused on other important activities such as celebrating a new year!

While this report is 13 pages in length, the first 7 pages are a must-read for lawyers and legal professionals as Chief Justice Roberts highlights the growing importance of artificial intelligence (AI) in the legal industry.

Here are some key takeaways from the report:

  • Big Focus on AI: The report points out that “The legal profession is, in general, notoriously averse to change.” As a result, many court systems across the United States have not taken advantage of leading technology to help enable judges, lawyers and litigants. The fact that this report from the highest court in our country devotes a significant portion of its content to a discussion of AI demonstrates that the legal profession should not be ignoring AI. Instead, we need to learn more about AI, understand its growing impact on the delivery of legal services and be proactive in shaping an AI-infused society.
  • AI is a Tool and a Weapon: The report recognizes both the potential benefits of and challenges with AI solutions. The report states, “AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously it risks invading privacy interests and dehumanizing the law.” The reality is that every form of technology – whether the advent of the personal computer, the rise of the internet, the growth of social media, the smartphone revolution, cloud computing or the rise of online gaming – has pros and cons. A book written in 2019 by Microsoft Vice Chair and President Brad Smith and Carol Ann Browne, entitled “Tools and Weapons,” does an excellent job of outlining the opportunities and risks of technology in our digital age.
  • Judgment, Judgment & Judgment: As a big fan of tennis, I love that the report uses a tennis example to drive home the critical point that lawyers’ judgment skills remain essential as AI becomes increasingly relevant: “Many professional tennis tournaments, including the US Open, have replaced line judges with optical technology to determine whether 130 mile per hour serves are in or out. These decisions involve precision to the millimeter. And there is no discretion; the ball either did or did not hit the line. By contrast, legal determinations often involve gray areas that still require application of human judgment.” Moving forward, lawyers will need to be even better at honing and using their judgment and smart risk-taking counseling skills for the benefit of their clients. Skills like excellent judgment, effective communication, strong advocacy, constant collaboration, relationship building and leveraging leading technology tools will be even more important for lawyers as AI technology becomes more pervasive.

As we begin 2024, and as lawyers increasingly use AI solutions to serve clients and our clients increasingly seek help from lawyers on AI matters, here are 3 best practices for lawyers:

✅️ Skill-Up on AI: Lawyers need to understand the potential risks and benefits of using AI tools. We need to learn as much as we can about AI solutions and the rapidly evolving AI regulatory environment.

✅️ Legal Ethics & AI: Similar to how many US state bar organizations have issued legal ethics opinions on cloud computing solutions over the past several years, we will soon see legal ethics opinions on AI from various states. Be sure to learn and comply with those opinions from the states where you are admitted to practice law.

✅️ Trust & AI: Remember that trust cannot be claimed. Trust must be earned. Conduct appropriate due diligence when evaluating potential AI providers and only select responsible AI providers and solutions that you can truly trust.

During my legal career, I have had the good fortune to deliver legal support to some of the best marketing and sales enablement professionals in the world.

One key thing they have taught me is the importance of building and periodically evolving your brand.

Over the past 5 years I have authored 50+ blog posts under the “In-House Consigliere” name/brand that have been largely focused on sharing my observations from an in-house legal perspective. “Consigliere” is an Italian word meaning “counselor” or “advisor,” made famous by the Tom Hagen character portrayed by the actor Robert Duvall in the 1972 movie “The Godfather,” where Tom Hagen served as the consigliere to the mafia boss Don Corleone. While in-house counsel should avoid being a member of a criminal syndicate, they should view themselves as trusted consiglieres to their clients.

The time has now come to pivot my blog. Since I have a passion for the growing area of artificial intelligence (“AI”), since AI is a multi-disciplinary area of the law, and since there are great opportunities for lawyers to positively impact the AI space, moving forward I plan to share my personal thoughts at the intersection of AI and the law under a new blog name/brand: “It’s AI All the Time.” My hope is to provide a practical perspective about AI for the legal profession, built upon my 30+ years as an in-house lawyer for great technology companies like Microsoft, Accenture and IBM.

I invite you to read my periodic posts on AI, subscribe to my blog and provide me with any feedback.

Thank you in advance for your support and I’m excited to share my content about AI and the law.

This has been a tumultuous week in the football world as two of the all-time greatest coaches have stepped down from their jobs.

After 17 seasons and 6 college football national championships, Nick Saban announced that he was retiring as the head football coach of the Alabama Crimson Tide.

Soon afterwards, Bill Belichick left the NFL’s New England Patriots. Belichick had been the coach of the Patriots for 24 seasons and won 6 Super Bowl titles.

Here’s a “vintage” photo above of a young Belichick and a young Saban when they both coached for the Cleveland Browns back in the 1990s.

While I think we will inevitably see these coaching icons in other leadership positions on the gridiron, here are 3 leadership lessons that we can learn from them:

🏈 Learn from Your Mistakes: Before becoming legends at Alabama and New England, both Saban and Belichick had coached elsewhere with lackluster results. Their coaching struggles early in their careers and their abilities to adapt and learn provided them with the foundation for their eventual sustained coaching success.

🏈 Grow Top Talent: Both of these icons had the ability to identify, secure, develop and retain top talent on the football field. In turn, they created “destination organizations” at Alabama and New England where the best players and best assistant coaches wanted to play and coach.

🏈 Be Transparent & Drive Clarity: Saban and Belichick have always driven precise role clarity and exacting high expectations for their players and coaches. This transparency enabled them to build high performing organizations.

What else can we learn from the quite possibly greatest-of-all-time (AKA G.O.A.T. 🐐) coaching legacies of Saban and Belichick that can be applied to the workplace?