Graham H. Ryan is a partner in the Litigation Practice Group and a member of the commercial...
Dave Scriven-Young is an environmental and commercial litigator in the Chicago office of O’Hagan Meyer, which handles...
Published: May 7, 2024
Podcast: Litigation Radio
Category: Legal Technology, Litigation
Generative AI is everywhere, including inside many law practices, drafting documents, analyzing data, researching, and creating content. But what happens when AI makes a mistake? Who’s liable? The implications could be enormous, and we may see a new wave of liability litigation. It’s a brave new world without a lot of legal guardrails.
There are existing, vague, nonspecific “tech” and business practice laws, and there are emerging AI-specific laws that aren’t backed by much precedent from the courts. Privacy laws, data usage rules, intellectual property laws, and state and federal rulings have created a jigsaw puzzle for lawyers to apply in the AI space.
Guest Graham H. Ryan is an experienced litigator, an author, and a frequently quoted commentator on tech issues and the emergence of generative AI and its consequences.
Ryan says it might help to look back to the “wild west” of the Internet’s earliest days and the infamous “Section 230” that has helped decide who’s liable for what when it comes to digital content. How far does Section 230 go when AI starts creating content instead of simply searching for it? And when does Congress get involved? Proceed with caution.
Resources:
International Association of Privacy Professionals, IAPP
IAPP “AI Governance Global 2024” conference
“The AI Industry Is Steaming Toward A Legal Iceberg,” Wall Street Journal quoting Graham Ryan
Harvard Journal of Law and Technology
Moody v. NetChoice, LLC, via SCOTUSblog
“Sanctions for Fake Generative AI Cites Harm Clients,” Bloomberg Law Analysis
“The EU Artificial Intelligence Act”
“Schumer Launches New Phase In Push For AI Bill,” Politico
American Bar Association Litigation Section
Special thanks to our sponsor ABA Section of Litigation.
Dave Scriven-Young:
Hello everyone, and welcome to Litigation Radio. I’m your host, Dave Scriven-Young. I’m a commercial and environmental litigator in the Chicago office of Peckar & Abramson, which is recognized as the largest law firm serving the construction industry, with 115 lawyers in 11 offices around the U.S. On this show, we talk to the country’s top litigators and judges to discover best practices in developing our careers, winning cases, getting more clients, and building a sustainable practice. Please be sure to subscribe to the podcast on your favorite podcasting app to make sure that you’re getting updated with future episodes. This podcast is brought to you by the Litigation Section of the American Bar Association. It’s where I make my home in the ABA. The Litigation Section provides litigators of all practice areas the resources we need to be successful advocates for our clients. Learn more at ambar.org/litigation.
We all know that AI can be extremely helpful in supporting our practices: helping with drafting, assisting in document review, and assisting with marketing, among other applications. But what happens when your firm or your client uses AI to produce content, make decisions, or influence the lives of others, and it turns out that AI made a mistake? Who is liable? Well, the implications could be monumental under our laws that govern liability for harmful speech and defective products. Could this create a flood of lawsuits? Our guest today will help us answer those questions. Today’s guest is Graham Ryan. Graham is the author of “Generative AI Will Break the Internet: Beyond Section 230,” published in the Harvard Journal of Law and Technology. He also holds an international designation as an artificial intelligence governance professional from the International Association of Privacy Professionals. He’s a partner at Jones Walker LLP handling complex commercial and appellate litigation, and he has been quoted on AI legal risk in publications such as the Wall Street Journal. Graham, welcome to the show.
Graham H. Ryan:
Hey, Dave, great to be on.
Dave Scriven-Young:
So tell us more about your international designation and, just generally, how you got into privacy and AI practice.
Graham H. Ryan:
Yeah, so I guess my journey really started back in law school, as it does for a lot of lawyers. I had the pleasure of serving as the first online editor for our law review, and we undertook the task of developing our law school’s first online companion to the print edition of the law review. And so that was really my first foray into the tech space, and I decided, of course, to write a law review article on a tech topic. Fast forwarding a couple of years, when I was a young associate, there was actually a case in federal court that dealt with that exact topic; it had to do with tech and the First Amendment. And I got a call that said, look, you wrote the law review article on this, can you help us out? And that was really my first professional experience in jumping into the tech law space. Since then I’ve had a very broad-based commercial and appellate practice in state court, federal court, and regulatory proceedings, and the building blocks have stacked up over the years with various clients who have had issues in the tech space. I’m not a data scientist, and so I’m always trying to learn more about the technical components of how a lot of the emerging technologies work, and that of course facilitates a better practice in the courtroom. So that’s kind of the short story on how we got here today.
Dave Scriven-Young:
Got it. And just to give more information about the IAPP, the International Association of Privacy Professionals: they help educate lawyers and other professionals on privacy issues, including AI.
Graham H. Ryan:
That’s right. And they have a lot of different tools, modules, and learning opportunities, and they also provide certain certifications and designations for non-lawyers and lawyers alike. I thought it would be a helpful opportunity to dive a little bit deeper into the AI space, understand the lifecycle of design, development, and deployment of AI systems, and then of course the existing and emerging patchworks of law that might apply in the AI space.
Dave Scriven-Young:
So let’s do that deeper dive into AI legal risk. Can you give us a broad overview of that potential risk, including the Section 230 concept?
Graham H. Ryan:
Sure, Dave. So as you mentioned, this is an incredibly exciting time for individuals, businesses, anyone that’s using or benefiting from AI: really impressive tools, a lot of different use cases, and a lot of potential and promise. But with that potential of course comes a healthy dose of uncertainty, and that includes legal uncertainty and legal risk. I was looking at a Pew poll the other day, for 2024, where 52% of US adults are actually more concerned than they are excited about the use of AI in the future, and only 10% were more excited than they were concerned. So we’ve got the two sides of the coin, the potential and this concern, and a subset of that is of course legal risk in our world. And I think a lot of where we are right now in 2024 with the onset of AI has parallels to the nineties, when the internet was rapidly developing.
This was back when AOL reigned supreme. I don’t know if you can still hear your dial-up modem dialing up to log on to AOL Instant Messenger, but there are a lot of parallels to the nineties. That was back when the internet user population worldwide was not 5.4 billion like it is today; it was only 40 million. And in the nineties, society was confronted with much the same issues we’re dealing with today, which is that we have a once-in-a-generation technological advancement that we know is going to fundamentally alter the way that we communicate, access information, conduct commerce, everything, and we want to nurture and harness that technology, but do so in a thoughtful and responsible way. And what Congress did in the nineties was say, look, we want to preserve the vibrant and competitive free market that exists for the internet, unfettered by state and federal regulation.
And so in 1996, Congress enacted what’s come to be known as the 26 words that created the internet: Section 230 of the Communications Decency Act. This was for a long time a relatively little-known law that of course has made its way to the headlines in recent years. But what Section 230 is, is a law that provides broad civil immunity to websites for claims that are based on third-party content. Think of a search engine retrieving and displaying web-based content that the search engine itself did not create, or a social media network that allows a post by a user that of course the social media network did not create. Those websites and those social media networks, if they’re sued, can generally invoke Section 230 immunity against a lawsuit based on harmful content created by a third party. So Section 230 is really the foundational law that underpins the internet and gave rise to the internet’s development from the nineties all the way through present day.
And when it comes to legal risk, I think Section 230 is a great place to begin, not only because it tells the backstory of the growth of the internet from the nineties to present day, but because it really encapsulates the broader dynamic that we’re grappling with now, which is: how do decades-old laws, designed perhaps for a different state of the technology, apply to new emerging technologies like AI? And individuals, organizations, society more broadly, and really lawyers and judges, are going to have to make those determinations. So we can take a deeper dive into Section 230, but before doing that, to take a step back and look at the broader landscape of legal risk arising from AI, I think you really have two buckets that you’re going to want to look to. The first is the preexisting laws that are already on the books, put on the books without AI in mind.
And the second is the emerging laws and guidance that bear directly on AI. With the former, the existing laws, of course we have a lot of precedent and well-developed bodies of law, but not as applied to AI, so regulatory agencies, courts, and lawyers advising their clients are all trying to figure out how those puzzle pieces fit. And then of course, with the latter, the emerging laws, we have laws that are directly tailored for AI, but there’s no well-developed precedent and not as much robust guidance. So in that first bucket, preexisting laws, think laws and regulations on unfair and deceptive acts and practices, think non-discrimination laws, product safety and liability laws, of course IP law. You’ve got a patchwork in the US of state tort law liability regimes, privacy laws, laws on the use of data, and then of course our regulatory enforcement framework: the FTC, DOJ, Consumer Financial Protection Bureau, SEC. So these are the preexisting patchwork of laws, and I think over the next 3, 5, 10 years we’re going to see a more well-developed body of case law, and perhaps amendments to statutory law along with regulatory guidance, that are going to help practitioners and businesses understand how these preexisting laws apply to these new and emerging technologies.
Dave Scriven-Young:
And those preexisting laws, though, they wouldn’t apply, as I think you said, to social media sites and websites in general because of the protections of Section 230. Do I have that right?
Graham H. Ryan:
So Section 230 provides a broad base of civil immunity. For example, on the product liability laws, you have many courts which will find Section 230 immunity preempts claims based on a preexisting product liability law. And that’s not in every case, but generally speaking, yes, Section 230 provides this broad base of immunity, provided that the internet website or platform falls within the statutory definition and the claim is one that falls under Section 230, which is going to give rise to a defense based on Section 230.
Dave Scriven-Young:
Got it. Okay. So walk us through then, on the technology piece, the difference between the liability that Facebook might face, or not face, based on Section 230, as opposed to something like ChatGPT and generative AI generally: what the differences are and why a generative AI product may not have that same sort of immunity.
Graham H. Ryan:
Right. And this is such an interesting issue, because for decades, since 1996, you have US courts expansively applying this Section 230 immunity to further that purpose of preserving the vibrant, competitive free market. And over the years, the functional scope of Section 230 has really evolved alongside internet technology. It initially covered passive intermediaries, like AOL online message boards back in the nineties, and today we have recent cases where Section 230 is invoked and protects advanced social media algorithms that filter and promote personalized content. So in the intervening period, courts have done a lot to preserve that foundational law of the internet. But there is one bright line that courts have not crossed, and that Section 230 by its text will not allow crossing: Section 230 does not extend to internet-based services that actually create or develop content.
And as we know, technology has crossed that line. The internet’s future development and current state of the art is being shaped by generative AI, which performs a really unprecedented technological role in actually creating and developing content, rather than simply retrieving it like search engines have traditionally done. And so you’ve got this brewing dynamic where courts, which have been long reluctant to disrupt the underpinnings of the internet and which have carefully adapted the Section 230 legal standards to continue to extend its protections to new technologies, are soon going to be faced with this question: will Section 230 immunity, which has long been stretched to cover new technologies, extend to generative AI? And as we know, generative AI has a substantial role in the creation, and at least in the development, of content. Courts have not yet addressed that question. We’ve heard commentary from the bench, but I think it’s likely that courts are going to want Congress to address this question.
Dave Scriven-Young:
And there was, if I recall, some mention from a Supreme Court justice about Section 230 and potentially the impact on, or liability for, generative AI content.
Graham H. Ryan:
Yeah, so this was really interesting. In 2023, a case made its way all the way up to the Supreme Court. This was going to be the big Section 230 case; tech companies would finally have some guidance from the high court on the outer bounds of Section 230 immunity. This was Gonzalez versus Google, a case that originated in 2015, when ISIS terrorists conducted coordinated attacks across Paris, killing 130 people. The parents of one of the victims sued Google and said, Google, you are liable, you aided and abetted and conspired with ISIS by virtue of a lot of what YouTube was doing with its videos. Google in defense invokes Section 230 immunity. The district court dismisses the plaintiff’s complaint for failure to state a claim. The Ninth Circuit affirms, and the Ninth Circuit also found that most of the plaintiff’s claims were barred by Section 230. The Supreme Court grants writ to review the Ninth Circuit’s application of Section 230.
So everyone’s patiently waiting. And what does the Supreme Court do? The Supreme Court issues an opinion that says, we decline to address the application of Section 230 because the complaint fails to state a plausible claim for relief. So the Supreme Court completely punts, but we have some interesting commentary from the bench. During oral argument, you have justices from all across the spectrum saying, effectively, this is a really big deal, this Section 230 stuff, and it’s really confusing. Congress, do your job; we need you to step in and handle this. I think it was Kavanaugh who said we need to put the burden on Congress to change that and consider these implications, and Kagan also said, isn’t this something for Congress to do, not the court? So you have justices signaling to Congress: this is a big deal. It has global ramifications for everything from commerce to the way technology is developed.
Congress, you need to step in. So they punt. And then again, more recently in Moody versus NetChoice, which was this year, one of those state-law content moderation restriction cases that dealt with the First Amendment, you have justices again raising Section 230 concerns from the bench. So I think the takeaway here, when it comes to the Supreme Court and Section 230, is that the Supreme Court has had the opportunity to address Section 230, and it has actually granted writ with what seems to be the intent to address Section 230, but it has punted, and it has really expressed a desire for Congress to step in and conduct some sort of Section 230 reform.
Dave Scriven-Young:
So we’ll get to Congress in a minute, but I wanted to circle back to who, or what companies, are potentially liable for generative AI, because we’ve had several episodes on this podcast about AI and how it could be a useful tool to law firms and lawyers generally. And we’ve heard a lot of cautionary tales about lawyers who have used AI to their detriment, having briefs filled with citations to cases that don’t exist, right? So who is at risk when it comes to potential liability for AI-generated content?
Graham H. Ryan:
So that is an unresolved question, and a great one that everyone’s asking. To answer that question, you need to start by looking at the supply chain of these generative AI systems. You have the builders or the developers, or as the EU AI Act calls them, the providers; then you may have some sort of distributor; and you also have your users or your deployers. The fundamental question is, what obligations fall on the different folks in the supply chain? And in the US, the answer is very unclear. If we’re looking for guidance, one place to look is the landmark European Union Artificial Intelligence Act, which is really the most robust, comprehensive AI governance framework that exists on the planet. It has a classification framework that parses out the different players in the supply chain, and it imposes different obligations, which could give rise to liability, on the different players.
So for example, the EU AI Act says there are a whole lot of different types of AI systems and use cases out there, and we’re going to prohibit some of them, like the ones that use subliminal techniques to materially alter behavior or those that exploit certain groups due to age or other conditions. You can’t have those. But the interesting part of the EU AI Act, when it comes to these different players and who’s going to be responsible, is that it focuses in on the high-risk use cases for AI. Those include AI in critical infrastructure, in education, in employment contexts like CV filtering, in law enforcement, and in immigration. So there are these buckets of high-risk AI, and the Act says, if you’re playing in that space, if you’re developing, building, or providing AI, or even if you’re using AI in that space, you have certain obligations.
And so for these high-risk systems, the EU AI Act imposes certain obligations on developers, such as creating and maintaining certain technical documentation, having a robust data and governance strategy, and certain transparency requirements. So if you’re a developer of high-risk AI, and you’re either in the EU market or your AI is going to be used in the EU market, you have to create instructions for the use of the AI system: how to implement the system, how to implement human oversight over the system. You actually have to create a playbook for the end user. And there are all sorts of other obligations imposed on developers. Now for the user, and this is a question that I think has broader application, obviously, because a lot of folks are using AI, you have to follow those instructions I referenced, that little playbook.
You have to not only follow those instructions, but you also have certain monitoring requirements for the outputs, and if there’s a serious incident, like a hallucination that could cause harm, you have reporting requirements. So this is an interesting framework, and we’re obviously on the very front end of this globally, but the EU framework lays out an allocation of responsibilities and obligations for the different players based upon their role in the supply chain. So that’s one place to look. And then of course in the US, we’re still at the infancy stage. I know we haven’t gotten to our discussion of Congress yet, but we do not have any comprehensive framework for AI governance in the United States.
Dave Scriven-Young:
And absent any statutory requirements by Congress at this point in time, we’re left with what, common law theories: defamation, negligence, that sort of thing?
Graham H. Ryan:
When you zoom out and look at the big AI legal risk landscape, as I mentioned earlier, you’ve got those two buckets: preexisting laws and emerging laws. So to drill down a little bit more into the existing laws that we all hear about and know about, let’s take IP, for example. A lot of risk there. How do the principles and protections of copyright law apply to AI? You’ve got data scraping and collection. Can the output of AI be considered original and therefore warrant copyright protection? A recent federal court in the US actually said no, AI cannot be listed as an inventor for purposes of obtaining a patent. So there are a lot of issues in the IP space, obviously. I mean, how is an AI model trained? What data is being used? Is that data copyrighted? Are there IP protections? And then there’s another question about the violation of intellectual property rights.
I mean, how is that going to play out? I think we’re starting to see those cases play out, but that’s one big space to watch, obviously, the IP space. And then another one is data protection and security. We didn’t need to wait for the onset of ChatGPT and generative AI to understand that protection of data and privacy considerations are incredibly important. So there are preexisting laws there that could give rise to risk from the use of AI systems, and also around automated decision making. There is an increasing number of state laws governing how and when automated decision-making technology can be used: what can it be used for, what can’t it be used for, what obligations are imposed relating to notice to individuals when an automated decision is used to materially affect their rights, and are there opt-out rights for individuals? So there are a lot of considerations around automated decision making.
And then one kind of interesting point is the licensing of AI models, and also of the underlying data set that might be used to train a model. If you want to license that, what are the preexisting laws and how are they going to play out? Obviously IP comes into that, and there are going to need to be data licensing terms between the parties: are we designating a certain model component as a trade secret, are we limiting the right to use certain components? So there’s a lot in the licensing space when it comes to both AI models and data. And one really big point is in the regulatory space. Last year we had a joint statement on enforcement efforts against discrimination and bias in AI that was issued by the Federal Trade Commission, the DOJ, the Consumer Financial Protection Bureau, and the EEOC. And this is a rare joint statement that says, we are preexisting agencies.
We are going to use our enforcement powers to root out any discrimination and bias, in general and in particular as it applies to the use of AI systems. So just because you’re using an AI system doesn’t mean you’re off the hook for these regulatory rules that have been on the books. And the FTC, actually, that is a huge place to continue to watch developments, because the FTC of course oversees enforcing laws and regulations regarding consumer protection and deceptive and unfair business practices, and it has signaled that it is going to intensify its enforcement actions in the AI space. The FTC has actually created an Office of Technology and an authorization that allows a streamlined issuance of civil investigative demands, like a subpoena, to businesses to find out whether they are violating any unfair and deceptive practices regulations. So there’s a lot going on in the FTC space as well.
Dave Scriven-Young:
That’s really interesting, and I’m sure we’ll see much more regulation coming out of the federal government. So do we see any congressional action on the horizon?
Graham H. Ryan:
So as I mentioned earlier, at least with respect to Section 230, courts have signaled that they’d like Congress to step in. So what’s going on in Congress? A lot and a little. Last year there were several members of Congress demonstrating significant interest in the AI space, with a lot of hearings and a lot of legislative proposals aimed at AI, but largely they’re disjointed, and we’re waiting for one to be the breakthrough AI framework. If there’s one to keep an eye on, I’d say it’s some legislation led by Majority Leader Schumer. It was a bipartisan effort aimed at a comprehensive AI policy framework, and it’s come to be known as the SAFE Innovation Framework; it’s going to have various guardrails. That’s the one that a lot of folks are watching. It hasn’t moved much, so keep an eye on that one.
There are a couple of other notable developments in Congress in the AI space. We know about these foundation models, like the one ChatGPT is built on, and the regulation of foundation models is something that has garnered a lot of attention in Congress. Also AI labeling: there’s one bill that would require a disclaimer on virtually all generative AI outputs regardless of format. Obviously that needs a little bit of refining as to how it would be implemented, and in its current form, I think it doesn’t really clarify whether it applies to users or developers of generative AI systems. I mean, back to your question about which players should be concerned about these different laws. I think there was a rush to introduce a lot of legislative proposals, and of course they would be refined throughout the legislative process, but there’s a lot to be done in making sure that there’s a more comprehensive proposal that accounts for all these different concerns. And then on the generative AI front, there’s actually a specific bill, from Hawley and Blumenthal in the Senate, that bears directly on Section 230 immunity and generative AI. That bill would provide expressly that generative AI systems do not have Section 230 immunity, and I think that is one piece of legislation to continue to watch. It’s kind of a race that no one wants to finish, Congress or the courts: who’s going to address this fundamental question about the underpinnings of the internet and Section 230?
Dave Scriven-Young:
Well, it’s really interesting thinking about that dichotomy between what we’re talking about. At the beginning of the internet, we had Congress coming out with a very strong statement of immunity for internet service providers and for social media websites. Social media didn’t really exist in the very beginning, but it seems to me that this immunity, this act by Congress, helped the internet move along and progress in its kind of natural way, for good or for ill in terms of society. I think a lot of people now looking back would say, well, social media has harmed society a lot; some people on the other side would say it’s improved society a lot. So there are thoughts on both sides. But thinking about the development of the technology and the development of the internet, it seems to me Section 230 helped the internet progress substantially. What are all these new regulations and potential congressional acts doing to the development, or potential development, of AI technology? Because it seems like we’re in a very different space socially as well as on the regulatory side.
Graham H. Ryan:
That’s right. And fundamentally that is a policy question, but with respect to the impact on the development of technology, you’ve actually got OpenAI CEO Sam Altman coming out and warning that the company might pull its services from the EU market in response to certain AI regulation being developed by the EU. And so I think jurisdiction matters. I think various jurisdictions around the globe are very cognizant of both the consumer protection piece and the policy considerations with respect to the development of technology on their turf. So when you have the OpenAI CEO saying, if these regulations are too stringent, we will pull our services, that’s something to take a look at. On the other hand, if you have a large amount of uncertainty and unpredictability from a regulatory standpoint, from a legal standpoint, that is going to be considered in the C-suite as well when decisions are being made about what to develop, what resources to contribute to development, and what risk mitigation efforts to put in place. And I think, fundamentally, for a lot of companies, not only those developing AI but also those using and deploying AI, it’s all about putting in place an AI governance framework within a particular organization that is tailored to that organization, that accounts for the specific AI use cases within that organization, considering the jurisdictions where they operate, the industries they operate in, what the risk tolerance is, and what the resources look like, and putting in place policies and procedures that ensure responsible use of AI within the organization from a risk management standpoint.
And I think what we’re going to see over the next two or three years is a huge explosion in every organization that uses AI, which is going to be virtually every organization, ensuring that it has a comprehensive AI governance strategy, one that probably relates to or supplements a preexisting data governance strategy or privacy strategy and is fundamentally integrated across risk management, the technical folks who may be implementing an IT system or an AI system, and some kind of AI governance committee. So these frameworks, I think, are going to be very important for companies to consider going forward.
Dave Scriven-Young:
Well, Graham, unfortunately we’re at the end of our time together. It went super fast, and I think we just kind of scratched the surface of what we probably could have talked about. But any last thoughts you want to leave with our listeners, perhaps on what we can look for next coming from AI companies or perhaps from the law?
Graham H. Ryan:
Look, I think it’s very important to monitor agency statements and guidance that continue to be released, and important to monitor congressional developments, in particular the subcommittees that deal with technology issues. Watch for any further executive action coming from the White House. And then of course the courts: they’re going to be forced to address these issues as suits are filed dealing with liability arising from AI, and I think those court opinions are going to have a huge rippling effect on the use and development of AI going forward.
Dave Scriven-Young:
Great. Well, Graham Ryan from Jones Walker, LLP, thank you so much for being on the show and we’ll be looking for those future developments.
Graham H. Ryan:
Thanks Dave.
Dave Scriven-Young:
Thank you to DISCO for sponsoring Litigation Radio. DISCO makes the law work better for everyone with cutting-edge solutions that leverage AI, cloud computing, and data analytics to help legal professionals accelerate e-discovery and document review. Learn more at csdisco.com. And now it’s time for our quick tip from the ABA Litigation Section. Let’s welcome back Daryl Wilson to the show. Daryl is the ABA Young Lawyers Division Speaker for the 2023-24 bar year. In that role he serves as the chief policy and legislative officer, overseeing the YLD Assembly. Welcome back to the show, Daryl.
Daryl Wilson:
Thanks Dave, I appreciate it.
Dave Scriven-Young:
So I understand you’re going to be giving us tips on improving one-on-one meetings. So what’s your quick tip?
Daryl Wilson:
Yeah, so I want to kick off this month’s tip by talking about how to maximize your one-on-one with your boss. So if you’re preparing for a one-on-one meeting with your boss and you’re curious about what to do, how to maximize that time, and how not to have idle time during your one-on-ones, I want to give you a few tips on how you may be able to maximize your one-on-one. The first tip is to plan ahead. Obviously you’ve received the date and the time from your boss, and you want to think about what kinds of things you may be able to discuss during your one-on-one. The first thing that I would tell you to do as part of planning ahead is to develop an agenda.
You don’t want to waste anyone’s time by having the one-on-one and then having it be awkward while you’re trying to find things to discuss; everybody’s time is valuable. So develop an agenda covering things that may be of importance to you and also your boss. You can’t expect your boss to have an agenda prepared for you already. You have to understand that this is your time and that you should take ownership of the opportunity to discuss things that may be of importance to you, and maybe receive feedback. The next thing I will tell you to do during this time is to provide updates to your boss. You want your boss to know what you’ve been working on and what’s been going on in your world as you do your job. In providing the updates, you want to discuss matters that you would like to have some visibility on. That may look like highlighting some of your wins and some of the things that you may have worked on from when you last had your one-on-one to your current one. Look at your work product and understand what things you were able to do successfully and accomplish since that last one-on-one, when you may have developed a plan for yourself and set goals to perform your best at your job. You want to show your confidence on these things and show that you were able to confidently tackle a task that was given to you, or a task that you developed for yourself, so that your boss has a clear picture of what you have been working on since the last one-on-one.
In the alternative, you also want to discuss the areas of opportunity or areas of growth that you may have for yourself. You may want to talk with your boss about things that you need help with, or where you may need their assistance to jump in and help you get a particular task across the finish line. You may want to talk about how your boss can best support you and what that looks like as you work on the particular task or goal you’ve developed for yourself. And if your boss provides you with some information, you want to take notes on what they’re telling you, so you can maximize your time with your boss, but also maximize your opportunities and areas for growth within your particular role.
The next thing I would tell you to discuss during your one-on-one is your career growth, your career opportunities, your career trajectory. You want to look for opportunities to be a value add to your team and your organization, using your particular areas of expertise, so you can potentially grow within your career and take it to the next level. You want to seek out stretch assignments. You may be asking yourself, what is a stretch assignment? A stretch assignment is an assignment that you take on that is outside of your normal roles and duties. If you’re an associate, or if you’re in-house counsel primarily dealing with litigation or legal matters, that may look like finding opportunities to assist another team within your organization or your law firm.
If you’re in-house, it may also look like taking on a particular role in an area you’re passionate about, whether that’s the mergers and acquisitions team or helping the organization grow. It may also look like opportunities within the diversity, equity, and inclusion team. You just want to look for opportunities to go outside of the things that you normally do and take on a stretch assignment to show your willingness to provide support to your team and your organization. And along the way you may find that you have a newly discovered passion for an area that allows you to be a value add to your team. The next thing you can do is volunteer and see if there are areas of opportunity within your current team. That may look like setting up procedures and policies for your team, or revising certain policies that are already in place.
Just look for areas that you can seek out as an opportunity to show your commitment to the team, to your own personal growth, and to the growth of the organization. Another thing that I would tell you to do is to request feedback. We always want to understand and know how we are performing in our roles, so definitely seek out the opportunity to ask your boss for feedback. That puts you in a place where you need to be able to accept constructive criticism or constructive feedback on particular assignments that you may have worked on. It’s always good to take this information in, accept the feedback, and use it to find opportunities for growth within your particular role and within your career itself. If you take that information to heart, I’m pretty sure it will be helpful as you go through your particular role, or maybe seek out additional roles in higher leadership within your organization.
The last tip I’ll leave you with: after you do all these things, planning ahead and preparing that agenda for the meeting, providing updates to your boss, asking about areas of opportunity and growth, looking at how to grow and elevate your career, and requesting feedback, make sure that before you leave that one-on-one with your boss, you have scheduled your next one. That way you can show your boss that you’ve taken in the information they provided, acted on the things you’ve discussed, and can provide another set of updates at that next one-on-one. And those have been my tips on the best way to truly maximize your time with your boss in a one-on-one,
Dave Scriven-Young:
So that you can help your career go to the next level. Terrific. Daryl, thanks for sharing those practical tips and thanks for being on the show today. Thank you. I appreciate it. And that’s all we have for our show today. I’d love to hear your thoughts about today’s episode. If you have comments or questions you’d like for me to answer on an upcoming show, you can contact me at d [email protected], and connect with me on social; I’m @attorneyDSY on LinkedIn, Instagram, X, and Facebook. You can also connect with the ABA Litigation Section on those platforms as well. But as much as I’d like to connect with you online, nothing beats meeting you in person at one of our next Litigation Section events. So please make plans to join us at the 2024 Class Actions National Institute in Nashville, Tennessee, taking place October 24th and 25th.
The National Institute is the premier class action conference in the United States and is attended by practitioners on both sides of the bar, esteemed jurists, in-house counsel, and academics. The world of class actions moves quickly, and with several recent court decisions and litigation trends, this year’s conference promises to be more relevant than ever. Whether you litigate and try class action lawsuits, want to learn more about how best to help your clients, or want to keep up to date on current class action decisions, you won’t want to miss this program. To find out more and for registration information, go to ambar.org/classactions. If you like the show, please help spread the word by sharing a link to this episode with a friend or through a post on social, and invite others to join the show and community. If you want to leave a review at Apple Podcasts, it’s incredibly helpful; even a quick rating at Spotify is super helpful as well. Finally, I want to quickly thank some folks who make the show possible. Thanks, Tom, and Michelle Oberts, who’s on staff for the Litigation Section. Thanks also goes out to the co-chairs of the Litigation Section’s audio content committee, Haley Maple and Tyler True. Thank you to the audio professionals from Legal Talk Network. And last but not least, thank you so much for listening. I’ll see you next time.
Litigation Radio
Hosted by Dave Scriven-Young, Litigation Radio features topics focused on winning cases and developing careers for litigators.