Justin Cary is an attorney at Drummond Woodsum, where he represents private and public entities and Tribal governments.
Luc Cary is associate director of machine learning at Invitae. Luc has over a decade of experience in software development and machine learning.
Jared D. Correia, Esq. is the CEO of Red Cave Law Firm Consulting, which offers subscription-based law firm business management consulting services for attorneys and bar associations.
Published: March 8, 2024
Podcast: Legal Toolkit
Category: Legal Entertainment, Legal Technology, News & Current Events
Adjusting to the new and persistent presence of AI in the legal world isn’t easy, but it is necessary. Jared welcomes brothers Justin and Luc Cary, an attorney and software engineer, respectively, to unpack the current state of AI in business and legal uses. They discuss current safety issues, emerging guardrails, best practices for research, AI in legal tech software, and more.
Later on in the Rump Roast, the guys play “Brother Up”, where Justin and Luc must identify famous brothers in history and pop culture.
And, should you be entering confidential information into generative AI tools? Well, there are layers, folks, but in the current AI environment, lawyers should never assume their data is secure. That being said, Jared doesn’t want to throw a wet blanket on the AI revolution; he just wants you to know the three simple things that will enable you to use these tools safely and ethically.
Since we have the Cary brothers with us this week, here’s a playlist all about siblings!
Our opening track is Two Cigarettes by Major Label Interest.
Our closing track is Promised Land by Brian Brown.
Special thanks to our sponsors Clio, CosmoLex, TimeSolv, and iManage.
Speaker 1:
It’s the Legal Toolkit with Jared Correia, with guests Justin and Luc Cary. We play a round of “Brother Up.” And then, did you know Jared’s a master of the paleontological arts? Neither did we, because that’s not a thing, but he is still going to tell us all about dinosaurs. But first, your host, Jared Correia.
Jared Correia:
Yeah, that’s right. It’s time for the Legal Toolkit podcast. So I’ve got that dog in me today, even if it’s just a little chihuahua like the Taco Bell dog. And yes, it’s still called the Legal Toolkit podcast, even though I have no idea what Titebond III Ultimate wood glue does. That’s a real thing, I guess, though I’m pretty sure you shouldn’t masturbate with it. I’m your host, Jared Correia. You’re stuck with me because Andy Garcia is unavailable. He’s too busy providing me with new tool references, as well as sucking in The Godfather Part III. I’m the CEO of Red Cave Law Firm Consulting, a business management consulting service for attorneys and bar associations. Find us online at redcavelegal.com. I’m also the CEO of Gideon Software, an intake platform for law firms. Learn more and schedule a demo at gideon.legal. Now, before we get to our interview today about AI, featuring Justin and Luc Cary, brothers working in the legal and software industries respectively, I wanted to take a moment to talk even more about AI, because why the fuck not?
I mean, seriously, the week we’re recording this, I’m doing four presentations on AI. Lawyers are just insatiable right now. It turns out AI is the ultimate aphrodisiac for the special attorney in your life. Lawyers are asking me regularly about data security and confidentiality with respect to inputting data into AI tools. That’s especially true of generative AI tools, which everyone is starting to use, or at least is aware of. And the question is always: should I put confidential data about my clients or cases into AI products? And the assumption that people tend to make when asking that question is that it applies to only one piece of software at a time, but that may not be the case, especially as we move forward in time. There are going to be increasing layers as to how AI is used in technology products, and that includes legal technology products.
Of course, it’s sort of already happening. And lemme tell you, this shit is tough to drill down on, even for single programs. So let’s talk about ChatGPT, for example. That’s the most popular generative AI tool in the world right now. Even the least tech-savvy, most dedicated Luddite attorneys are trying it out, as I mentioned. And there are a number of different ways that data is treated in ChatGPT, depending on the version. If you’re accessing the free version of ChatGPT, your data is going to be used to train the model by default. But in both ChatGPT and ChatGPT Plus, that’s the paid version of the software at a reasonable 20 bucks a month, you can also access data controls and turn off chat history, which means that your prompt data won’t be used to train the model. Now, a couple of weeks ago, OpenAI, that’s the company that launched and manages ChatGPT, released a helpful blog post about the use of temporary chats that the system won’t remember, as well as how memory functions in the system.
It’s worth a read in this context if you want to learn more about this. Now, there’s also ChatGPT Enterprise, which is a business subscription model. That, on the other hand, follows OpenAI’s API guidelines, which means that data inputs, which are called prompts, won’t be used to train the model by default. That means you won’t have to toggle anything off. Are you lost yet? Of course, that’s a lot of layers to peel off, but it’s not all that different from the way that software systems have generally worked: the more you pay, the more features and security you get. And while other generative AI tools are not as mature as OpenAI’s, I suspect a similar arrangement is going to be adopted by those other products as well moving forward.
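To make that API distinction concrete, here’s a minimal sketch of a call through OpenAI’s API, assuming the current OpenAI Python SDK (v1.x); the model name and prompts are placeholders, not a recommendation. Unlike the free ChatGPT web app, prompts sent this way aren’t used for training by default:

```python
# Minimal sketch: a prompt sent via the OpenAI API (assumes the openai
# Python SDK, v1.x). Per OpenAI's API guidelines, these inputs are not
# used to train models by default, unlike the free ChatGPT web app.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a careful legal research assistant."},
        {"role": "user", "content": "Explain the duty of technology competence under Model Rule 1.1."},
    ],
)
print(response.choices[0].message.content)
```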
Now, let’s enter Thunderdome, if you will. Imagine you’re using AI tools within your case management software, if that feature is available. Now, it’s not likely the case that that particular software company has developed and is using its own large language model to run those operations. That necessarily means, then, that it’s using someone else’s model, and it’s probably white-labeling it so it looks like it’s part of their product. The advantage there is that they can use that model that’s already prebuilt, and usually some of the functions around it, all while leveraging your case data as well, which they maintain in-system. That’ll give you more targeted results more quickly. That’s a good thing. But in that context, you don’t have the same control over how data’s managed in that base system. You can’t, for example, pierce that white-label veil, if you even know it’s there, to adjust that system the way you want to. When it comes to data security, you sort of have to trust that your primary vendor is doing it right.
Well, sort of. Stay tuned, because you can get aggressive about trying to unearth that process. Then what happens when you’re using an integration with AI components? The same story may apply in terms of who owns the underlying tool; it may not be the vendor you expect. And in this scenario, you’re at an even further remove. Data that you’re inputting into one system can get pushed along to a second system that processes it using AI. Throw in a bridging tool like Zapier for when a direct integration isn’t available, and we’ve got yet another layer in place. Now, just so you know, I’m not trying to make your life miserable or throw cold water on the AI revolution, which I absolutely think is a thing that is happening. The use of AI by law firms is going to be as impactful, if not more impactful, than when law firms started adopting cloud solutions.
I think that AI is in fact going to be perfectly helpful right up until it isn’t, you know, when it goes rogue and kills us all. But until then, we’re good. The immediate issue is how you should proceed in this current environment as an attorney. And there are three things that you should be thinking about doing, but there’s no real big plot twist here. It’s going to be the same three things you needed to do to review and manage the software you have right now, before AI was ever commercially available. It’s just that you’re probably not doing it. So first, if you’re going to use a software tool for your law firm business, and that’s any software, AI or not, like Dropbox or Google Workspace, et cetera, et cetera, pick up the paid version. It’s been said many times: if you’re not paying for the product, you are the product.
And this has never been truer of a product than AI software, because it requires high-volume testing to improve its functioning. Now, second, you need to vet every software product you want to use before you buy it, and you need to understand its features. And that’s in line with the comment to Rule 1.1, it’s either going to be Comment 6 or 8, depending on your state, the competency rule for lawyers, which indicates that lawyers, as part of their professional competency, need to be aware of the benefits and risks of the technologies they use in their practice. They need to keep abreast of that information. So the same way you would in engaging any other vendor, you ask a lot of questions of those providers that are trying to sell you AI tools. Basically, you want to act like an annoying little kid. Find out where and how the data lives; ask why and how it moves.
Find out whether the vendor has built a proprietary tool for AI or is relying on another vendor’s solution; that’s that white-label and integration issue. Ask if your data’s being used to train the system, or multiple systems. Ask about their data security and confidentiality policies. Record all of the information you acquire and store it somewhere. Then, when you’re comfortable, make your choice and buy the product. But unfortunately, your work isn’t done at that point, not quite yet, at least. You also have to stay on top of the technology you acquire. And this is a particularly vexing thing when it comes to AI, because the sheer pace of the technology’s advancement is staggering, more so than in almost any technology tool or tool set I’ve seen. So once you gain an understanding of the present-day version of the technologies you are using, then you’ve got to continue to learn about the development of those technologies by doing things like getting updated via the vendors’ websites, including through blog posts, news releases, and knowledge bases; following industry publications and thought leaders; setting up your own news alerts via search engines; and maybe more.
But that’s a pretty good start, honestly. And I do feel like this is something of a minimum competency standard, at least at that level. It’s sort of similar to making sure you update your desktop software, which was a much bigger deal back in the day, when not as many people were using the cloud. Thus, we arrive at the third and last prong. See, I can still use lawyer words. You need to do these things to build a competent understanding of the AI tools and how you use them, but now you’ve got to dive into the contracts. You’ve got to read and understand the contracts that you have with your vendors. Now, this should be the easiest one of all, because you’re a freaking attorney. Every technology company you’re going to work with will have a service level agreement (SLA) or end user license agreement (EULA). So if you are engaging a new vendor, review the entire contract and pay attention to the AI provisions.
Search tools are really helpful here. You can find references to AI and artificial intelligence pretty easily. Now, the other little twist here, and this is a twist, is that if your current vendor has added AI features or tools, which a lot have, take a look at how their contracts have changed if they didn’t have those tools before. And if the contract hasn’t changed, I’d reach out to find out why. Pretty simple, right? Nah, I’m just fucking with you. I know this is a pain in the ass and just another notch on your to-do list, right? But you can get through it. I believe in you. After all, don’t you want to maintain the last couple of decades you have left before the AI starts to manage you? Now, before we talk with Luc and Justin in an AI- and Molly-fueled romp, let’s hear a little something-something from our sponsors.
Then we’ll get into a Rump Roast that’s all about brotherly love. Okay, everybody, it’s time. Let’s get to the meat in the middle of this legal podcasting sandwich. Today’s meat is simply an AI-generated image of meat. I’m so sorry. Alright, then, let’s interview our guests, who we have today as first-time guests on the Legal Toolkit show, but longtime brothers. You can’t see ’em, but they’re in matching L.L. Bean vests right now, totally appropriate, I think. We have Justin Cary, who’s an attorney at Drummond Woodsum in Maine, and Luc Cary, who’s the associate director of machine learning at Invitae. Alright, Justin, Luc, how are you guys doing? Welcome to the show.
Justin Cary:
Hey, Jared, how are you? Great to be on here. Thank you for having us. Thanks for letting me bring my brother along. Doing great. Thanks for bringing me.
Jared Correia:
I’ve been looking forward to doing this for a little while, and I’m glad we got you scheduled. Since there’s two of you, I’ll indicate who I’m asking questions to, and you guys can jump in afterwards as well, if one brother gives an answer and the other brother wants to comment. But Justin, we’re doing a legal podcast here, so I want to start with you. You’re an attorney. You have this abiding interest in AI. I’ve heard you speak on the topic a number of times; you’re really eloquent about it. You do a very nice job with it. So what drives your interest in artificial intelligence, and then how does it intersect with your law practice?
Justin Cary:
I am interested. I think the origin story was Luc would always play with the precursor to Lego robotics, or Mind Sweepers, I think. Is that what you’d play with, Luc?
Luc Cary:
Lego Mindstorms.
Justin Cary:
Lego Mind… yeah, Mindstorms. And I never understood what was going on, and that ignorance has continued to this day. Whenever I talk about LLMs and all areas of AI with Luc, I feel like I’m just in a world that is not English, and I am fascinated by it, because it feels like it’s starting to creep into every area of practice for our law firm. So the firm I’m at is about 200 attorneys, and we do everything from municipal to tribal law to representing schools, and we end up doing all these sorts of things. But everyone has started to be a little bit on edge about this whole AI thing. And I guess to answer the second part, how it’s crept into practice: a lot of private employers and public entities have started to ask, what policies should we have? How do we train people? How do we adjust? How do students in a classroom, which maybe we can get into later, how do we know whether they’re actually writing things or not? So, coming from a place of ignorance myself, I’ve always been interested in how we integrate this stuff. So I think Luc’s maybe to blame for all of it.
Jared Correia:
All right. So that’s a perfect segue, Luc. I want to get into that AI stuff, but before I do, can you talk about what you do in your job on a day-to-day basis?
Luc Cary:
Yeah, absolutely. Yeah, so I’ve been at this company, Invitae, that I work at now for four years. I’m the manager of our applied AI team. We do everything from building our own, usually small, in-house models, to integrating open source or cloud models like GPT-3.5 or GPT-4 into software applications for our company, which is a genetic testing company. A lot of those applications are used for either helping to speed up operations for certain tasks or helping with interpreting results. Most of it’s human-in-the-loop AI stuff, but we own the entire software development process, from collecting training data for models, to evaluating and training different models, to integrating them, deploying them to production, making sure they do well, and monitoring models in the wild. And yeah, I’ve been in the field of software development in general for a little over a decade now. I got my start at Twitter, before it was X…
Jared Correia:
Good old days.
Luc Cary:
Yeah, before Elon. I worked for several academic organizations, Columbia, MIT, doing some…
Jared Correia:
Small schools. Yes. Gotcha.
Luc Cary:
Yeah, pretty big spots, doing everything from IT support, DevOps-y type stuff, and software administration, as well as lots of development work.
Jared Correia:
So, not law school, clearly. Nope. Now, because of all those words you said, people are probably like, what the fuck did this guy just say? Okay, so let’s run this back for a second: large language models, artificial intelligence. I’ve done, I think, four AI presentations this week for legal organizations, and a lot of people haven’t even tried ChatGPT once. So could you distill a reasonable definition of AI and large language models that you think a layperson would understand?
Luc Cary:
Sure. I would say AI, artificial intelligence, is best defined as trying to get computers to do tasks that humans do, whether that’s recognizing a picture of something as a dog, or being able to say that the handwritten zero or other number on my check is in fact a zero. So that, I think, is a good definition of AI, and it includes machine learning as a subset of potential artificial intelligence. And then large language models, LLMs, come from this space of natural language processing within AI, where you’re trying to train a model that can understand and really generate, or predict the likelihood of, the next letter or word in a sequence of letters or words. And that’s where models like GPT come in: really large language models have completely opened up the field of applications for language models by being pretty good at deceiving people into thinking their output might’ve been produced by a human, whether it’s a legal document, or even a picture of something, like some of the image generation models have grown and become great enough where it’s like, oh, this could have been painted by a person or something.
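That “predict the next word” idea is easy to see in code. Here’s a toy sketch, assuming the Hugging Face transformers and torch libraries and the small open GPT-2 model (an illustrative stand-in, not any model Luc describes), that prints the five most likely next tokens for a prompt:

```python
# Toy sketch of next-token prediction with a small open model (GPT-2),
# using the Hugging Face transformers library.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The statute of limitations is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # scores over the vocabulary at each position

# Turn the scores at the last position into a probability distribution
# over every possible *next* token, then show the five most likely.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob:.3f}")
```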
Jared Correia:
Okay, that was pretty good. I like that. Alright, Justin, I’m going to return to you for a second here. Given Luc’s definitions, which I think are pretty good and understandable, what does a business person slash attorney need to know about AI right now?
Luc Cary:
I think,
Justin Cary:
What was the recent Sam Altman ask? The $300 trillion question. That question is…
Jared Correia:
Well, if you answer it, you might be a wealthy man. Go on.
Justin Cary:
Well, I don’t know. I don’t know the answer. I’m passing the buck to someone else, because I truly think that we see all these studies. GitHub recently conducted a study where programmers are 55% more efficient when they’re using Copilot, that’s GitHub Copilot. So there might be a vested interest there, but, right…
Jared Correia:
Perhaps.
Justin Cary:
Yeah,
Luc Cary:
There have been studies refuting that, which have come out recently as well. It’s a contentious topic.
Justin Cary:
But I think the main point is just that there’s not that metric yet. There’s not even the baseline study and then the peer-reviewed study. I think that there’s now just this ambiguity in our space. When I do these conferences, and Jared, we’ve done them together, I’ve done them on my own, the first question I always ask is: how many people in the audience have used generative AI in some professional capacity? And it’s like three people, and they’re all under the age of 40, which is five people in the audience. So the question is always met with blank stares. And I think that, with the integration, we’ve really lagged behind as a profession.
Jared Correia:
Oh God.
Justin Cary:
And that’s just with the chatbots, and chatbots aren’t the end-all of generative AI. So it’s…
Jared Correia:
Right. Well, it’s funny, because I think most people, when they think of AI, they think, okay, that’s ChatGPT, right? But there are tons of different ways that AI manifests itself. So let’s talk a little bit about the risk assessment here. Justin, I’m going to stay with you for this one. Lawyers are no fun at parties, Luc, you may know this, because they’re always assessing risk and trying to fix it. But you just talked about how this is the Wild West right now. There are no baselines of any kind. It’s hard to assess the risk here in a defined way. And when lawyers look at that, they’re like, no, I won’t engage with this at all. And they have a hard time. We were doing a panel last night, and one of the panelists was like, well, we’re still building guardrails here, and every lawyer was trying not to throw up in their mouths. So how do you get past that, if you do, and what’s safe and what’s not safe? I know I’m asking you all these big questions, but I know you can handle it.
Justin Cary:
We need a regulatory framework. The EU AI Act has been pretty good at establishing the foundations, but we need something like that in the States. I think that it’s probably going to come from the federal government. It’s not going to be one of Biden’s executive orders; it’s going to be something that comes as a result of a committee of AI professionals. I think that it’s still going to be completely lacking all the nuance that we need, and everything that gets developed thereafter, trying to interpret it in rulemaking, or states trying to understand it. Also, the latency period is going to be huge there. It’s always going to lag behind the technology. Even the stuff that was made to address LLMs like GPT-3 has no bearing on multimodal AI. So I think we’re always going to be behind from a regulatory standpoint.
Jared Correia:
And I don’t think Joe Biden’s fucking around with AI and being like, Dr. Jill, look what I built on ChatGPT. I don’t think that’s happening. So part of the problem is we’ve got 80-year-old politicians trying to figure this out, and they don’t know their ass from their elbow when it comes to technology. So Luc, as somebody who’s a developer, who’s kind of building products like this, I assume that your view of risk and testing and assessment is a little bit different than an attorney’s would be.
Luc Cary:
Yeah, I think my view is probably a lot more technical: what can we technically do to eliminate risks? So how can we get some kind of dataset where we can do an offline evaluation of how an AI model performs on a given task, before we even talk about beta testing it or bringing it into an application? How can we try and keep humans in the loop when we deploy an application? And then how can we do monitoring in the wild of the quality of the model, and the stability of the model, and the drift of data over time, things like that? But in terms of risks, the risks are real. Even if you have all of these safeguards in place from a technical perspective, really good evaluation datasets, and all sorts of dashboards that track the drift of models and make sure the quality of the data coming in is good, you still have the model injecting bias that people might learn and then tacitly accept without realizing it. And then that may be incorporated into decisions about a patient’s care, or a case that impacts someone’s life, or something like that, depending on the application. So risk is real, and it’s all about how you can manage it upfront, during, and after. What’s your process for improvement? How do you limit the blast radius in the areas where we know things can be bad, like models hallucinating and creating misinformation, and how do you create safeguards against that? It’s hard. A lot of these are second- and third-order effects in terms of controlling risk.
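The offline evaluation Luc describes can be as simple as scoring a candidate model against a held-out labeled dataset before anything ships. A minimal sketch in Python, where `examples` and `predict` are hypothetical stand-ins for your labeled data and the model under test:

```python
# Minimal sketch of an offline evaluation gate. `examples` is a held-out,
# labeled dataset and `predict` is the model under test -- both hypothetical.
from typing import Callable

def offline_accuracy(examples: list[tuple[str, str]],
                     predict: Callable[[str], str]) -> float:
    """Fraction of held-out examples the model labels correctly."""
    correct = sum(1 for text, label in examples if predict(text) == label)
    return correct / len(examples)

# Gate deployment on a minimum quality bar before any beta test:
# if offline_accuracy(held_out_set, candidate_model.predict) < 0.95:
#     raise RuntimeError("Below quality bar; do not deploy.")
```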
Jared Correia:
I’m so glad you brought up the hallucination thing. We’re going to talk about that in a second. But before we do, what do you think of this notion of disclaimers throughout the generative AI products? I ask a legal question, it says, go hire a lawyer, and then I rephrase the question and it gives me an answer. What’s the appropriate level of control there at this juncture? Is there one?
Luc Cary:
I don’t know. I think it makes sense to include disclaimers on upfront pages when you log in, or in terms of service, or things like that. But people start to ignore them really quickly, and jailbreak the models as soon as they’re confronted by: I want an answer, the model won’t give it to me, how can I work around what it’s saying? Can I say, well, here’s a story about my grandmother, and in this story she really wants to hear this answer, and now can you provide the answer? And usually all of these jailbreaking techniques actually work pretty well on these models. So with the constant disclaimers, there’s been a lot of negative feedback, especially with ChatGPT and some of the other competitive models, Gemini, Bard, where people say, just do away with them. We don’t want to see them constantly coming at us.
Justin Cary:
I mean, I have to feel like, in my mind, if you’re actually committed to building a pipe bomb in your basement, you’re not going to be deterred by GPT saying it’s unethical to build a pipe bomb in your basement. So really, the people who are losing the quality of what the generative AI could otherwise provide, because of the proliferation of disclaimers, are the people who you shouldn’t be worried about doing some terrible thing with the model. And I feel like there are enough AI models out there that are open source that the premium AI models, the Geminis, the OpenAIs, the Bings or Bards, aren’t going to be the ones people use to do the terrible thing that all the disclaimers are mostly worried about. In my mind, I feel like that’s where the disclaimers should back off, and then we should maybe have a little bit more nuance, where the user can actually have some degree of education when using these models.
Jared Correia:
Yeah, I think that’s fair. And then there are not even terrible, but just mildly irritating things to some professionals, like accessing self-help legal resources. Okay. So Luc, you brought up this hallucination thing, and I don’t know if you and Justin have been talking about this, but there have been a lot of news stories lately about lawyers who are generating legal briefs in generative AI tools, and they have fake cases. The AI is hallucinating. So what does that mean, and why does it happen?
Luc Cary:
Yeah, so I think Justin already mentioned the Avianca case, which is one of those earlier, maybe the earliest, cases that became really well publicized, where, I think, a lawyer got slapped on the wrist and actually is now forced to get wet signatures on each document they provide, because of using a brief that referenced fake cases generated by the model. And the only fact-checking that lawyer did was asking the model, were these real cases or not? Which, if the model’s already making things up, it will just make up the answer: yes, they’re real.
Jared Correia:
Sure, Bill, these cases are real.
Luc Cary:
Whatever you want. But yeah, so what’s happening under the hood there is these models, when predicting and generating the next word, are combining different facts and pieces of information in their pre-training corpus, the set of information they were trained on, which for some of the large models is a huge chunk of the internet, and synthesizing that into information that may be entirely fabricated. And that’s the problem with the model in many cases, depending on the model and the application. I say that because there are some applications now where you can ground the answers with first sources, but ChatGPT doesn’t always do that when you’re chatting with it. And the hallucination comes about as a result of synthesizing information that doesn’t necessarily belong together and isn’t an accurate answer. And when you ask if it is an accurate answer, the model can’t say for sure.
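Grounding answers with first sources, as Luc mentions, usually means retrieving real documents first and constraining the model to them. A rough sketch, where `retrieve` and `ask_llm` are hypothetical stand-ins for a document search index and an LLM call, not any specific product’s API:

```python
# Rough sketch of grounding: answer only from retrieved first sources.
# `retrieve` and `ask_llm` are hypothetical stand-ins, not a real API.
from typing import Callable

def grounded_answer(question: str,
                    retrieve: Callable[[str, int], list[str]],
                    ask_llm: Callable[[str], str]) -> str:
    sources = retrieve(question, 3)  # real documents, not model memory
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(sources))
    prompt = (
        "Answer using ONLY the numbered sources below, and cite them by number. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```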
Jared Correia:
That was a good explanation. Okay. So Justin, back to you. You’re a lawyer, and AI may be hallucinating on you. How do you deal with that? Is it simply double-checking the information? Yes? Or is there more to it?
Justin Cary:
Is there more to it? I don’t know. I think it’s like anything else that you use for legal research. People have this conception of legal research as confined to the revered halls of some august institution, or all the…
Jared Correia:
Places Luc used to work. All right, go ahead.
Justin Cary:
Yeah, basically, or LexisNexis or Westlaw. But most of the lawyers I talk to are using Google Scholar, using Reddit, to find the raw data they need, and then they’ll go confirm before they make the argument to a judge. And so it’s the same thing. If I am looking up what the statute of limitations is for a certain crime, I’m then watching my Reddit page. I’m looking at r/lawyering and then this subthread on statutes of limitations, and then I click on this comment and it’s great, and it says all of these things, and it cites to a case. And it’s not like I copy and paste that into my brief. I’m going to go to Westlaw, I’m going to Shepardize that cite, I’m going to make sure that it’s good authority, and only then will I put it into my brief. And I think that these skills have been there all along. They’re hammered out in law school. Before generative AI existed, I went to law school, and that’s really the only thing I can say: I went to law school before, and now the same principles apply. Check the cite, check the work. And it is such a baseline skill that it’s surprising everyone’s getting all up in arms about it.
Jared Correia:
I know. Okay. Sticking with you, Justin, for one more question: legal tech has started to implement AI here and there. What do you think of the current products, and where do you think this goes?
Justin Cary:
I think the current products are good. I’ve used Westlaw’s AI products; Casetext has a good product. I haven’t experimented as widely as you have, Jared, with a lot of the products, but in my mind, the advantages outweigh the disadvantages. And through a couple of common sense procedures, you can be almost as certain as before AI existed that you’re not making the mistakes that are commonly attributed to using AI in practice.
Jared Correia:
Excellent. Okay, Justin, Luc, thank you for coming on. That was fun. Will you stick around for one last segment? We’ve got one more thing to do with you.
Luc Cary:
Absolutely. Sure.
Jared Correia:
All right. We’ll take one final sponsor break so you can hear more about what our sponsor companies are doing and their latest offerings. Then stay tuned, as always, for the Rump Roast. It’s even more supple than the Roast Beast. Welcome to the rear end of the Legal Toolkit podcast. That’s right, it’s the Rump Roast. It’s a grab bag of short-form topics, all of my choosing. You may be wondering, why do I get to pick? Well, that’s because I’m the host. Now, Justin and Luc, aside from my brother and me, and he’s been on the show before, you’re the first set of brothers to ever appear on the podcast. So I obviously want to take advantage of this opportunity by quizzing you on famous sets of brothers other than yourselves. It’s a segment I’m going to call “Brother Up.” But before we do that, as I recall, I did a lot of fucked-up shit to my siblings when I was younger. So I want to ask you guys: what’s the worst thing you did to each other when you were kids? Who wants to start?
Luc Cary:
Justin, I get it, I’ve got to lead with this one. We used to zip Justin up into a snowsuit and throw him into a bush that had a hornet’s nest in it, and we did it multiple times. And we put on, my dad had, it wasn’t like welding goggles, but some kind of face shield. And it would be in the middle of the summer, and he would come out and all the hornets would be sticking into the snowsuit. They wouldn’t sting him, and you could pluck them off of it. But that was not nice.
Jared Correia:
That’s pretty shitty. Okay, Justin, how did you reward this behavior?
Justin Cary:
I became a lawyer because I realized the cosmic unfairness of the universe, and that I had to learn the rules to protect myself. No, I don’t know. I feel like I didn’t do that much. Luc was three and a half years older than I was; he was bigger than I was, and smarter. So we played stick wars a lot, where we’d…
Jared Correia:
Go in the backyard. Stick wars, is that what I think it is? Just beating each other with sticks?
Justin Cary:
Not beating, throwing them at each other. So you would have one side of a tree, and then someone else would be like two feet away on the other side, and you’d actually throw them instead of hit them. But yeah, probably a good headshot or torso shot with a stick was the worst I did.
Jared Correia:
A little more strategy involved in stick wars than just slashing at each other with sticks. All right, that was better than I could have hoped. Alright, let’s let the games begin. I’ve got five trivia questions for you. We’re going to start off with a real easy one: this pair of brothers were first in flight. I know you guys know. Right?
Justin Cary:
Wright Brothers.
Jared Correia:
Yeah, yeah. We’re one for one. Okay, let’s make it a little tougher: these famous rappers are actually brothers. I’m going to give you multiple choice: 50 Cent and Big Daddy Kane, Snow and Tupac, or Warren G and Dr. Dre. Again: 50 Cent and Big Daddy Kane, Snow and Tupac, or Warren G and Dr. Dre. Actual brothers. Who do you think it is?
Justin Cary:
I’m going with C.
Luc Cary:
C. I was leaning towards C too,
Jared Correia:
Correct. And I tried to trick you here, because Warren G and Dr. Dre are actually stepbrothers: Dr. Dre’s mom married Warren G’s dad. You didn’t go for Snow and Tupac, huh? That was probably the right choice. Alright, you’re two for two. We’ll see if you can get to five for five. These three brothers are actually bears. Bears, the animals; sometimes you see black bears in Maine. Here are your choices: Grizz, Panda, and Ice Bear; Ursa Major, Ursa Minor, and Willis; or Sunny, Brownie, and Salmon Chase. So: Grizz, Panda, and Ice Bear; Ursa Major, Ursa Minor, and Willis; or Sunny, Brownie, and Salmon Chase. Bear brothers.
Justin Cary:
I don’t know. Are they, like, brothers? Literally brothers? Or is this like an Instagram bear trio or something? Yeah.
Luc Cary:
What is C?
Jared Correia:
They could be adopted brothers also. They could even be different types of bears.
Luc Cary:
I’m leaning towards B. What do you think, Jess?
Justin Cary:
Yeah. B sounds right.
Jared Correia:
B is incorrect, but this is good. You guys are doing great. Grizz, Panda, and Ice Bear are on a, I think it’s a Comedy Central show, or maybe Cartoon Network, called We Bare Bears. Have you ever seen this show? No? Really funny show that my kids watch. It’s a polar bear, a panda, and a grizzly bear, and they’re a set of brothers. Everybody check that out. Alright, we’re still doing good. You’re like over 50%. I’m pleased with the results so far. Here’s another one that I think is probably in your wheelhouse: you just might find these brothers at a Dunkin’ Donuts location. Ben and Casey Affleck, Frank and Jesse James, or Cain and Abel?
Justin Cary:
The Affleck brothers, right? Yeah.
Jared Correia:
One of my favorite SNL sketches ever, that’s the vanilla nut taps episode. That’s great. What’s your favorite flavor of coffee? Alright, you guys are doing great, three out of four, and we’ve got one left. These twin brothers lived the Suite Life on Disney Channel for three seasons. Fred and Ben Savage, Cole and Dylan Sprouse, or Mortimer and Mickey Mouse? I think you guys were ready for this one.
Justin Cary:
No, I thought you were going to say Drake and Josh.
Jared Correia:
No, no. See, I’m trying to test you. Okay, which shows on the Disney Channel did you guys watch?
Justin Cary:
We didn’t have cable, so…
Jared Correia:
Of course, of course you didn’t. That’s the disadvantage: you were outside beating the shit out of each other with sticks. Okay: Fred and Ben Savage, Cole and Dylan Sprouse, or Mortimer and Mickey Mouse?
Justin Cary:
Cole and Dylan Sprouse is my guess. Yeah, I’m with that.
Jared Correia:
Correct. Correct. All right. You guys remember The Wonder Years with Fred Savage? That was way back in the day. Oh, wow. I’m so fucking old. Okay…
Justin Cary:
We’d watch, they had videos on the public announcements channel of when the dump was open. It’s like, the dump’s open nine to 11:30, then lunch is 11:30 to 12:30. That’s what you’d watch. Yeah. Then it’d pan onto the next screen, and it’s like a knitting community event. That’s what the programming was where we were picking up the signal.
Jared Correia:
Everybody, dump’s open, got to get the fuck out of here. You guys kicked ass on the Rump Roast. The AI stuff was great. This was a lot of fun. Thank you for coming on. I appreciate it.
Justin Cary:
Yeah. Thanks again. Thanks, Jared.
Jared Correia:
If you want to find out more about Justin and his work at Drummond Woodsum, visit dwmlaw.com. That’s dwmlaw.com. If you want to find out more about Luc and his work at Invitae, visit invitae.com. That’s I-N-V-I-T-A-E dot com, I-N-V-I-T-A-E dot com. Check ’em both out; I’m sure their mom will be happy. Now, for those of you listening in Meddybemps, Maine, we’ve got a Spotify playlist that’s going to knock your wool socks right off your feet, perhaps onto the L.L. Bean vest you have on your floor. It’s songs about siblings: brothers, sisters, stepbrothers, and stepsisters. It’s all on the table. Now, sadly, I’ve run out of time today to tell you why I think the Tyrannosaurus rex has such small arms, but I will let you know that I believe that part of the reason is so that they could work on the carnival circuit. This is Jared Correia, reminding you to take off your shoes before you come into my house, or I’ll fucking kill you.
Legal Toolkit highlights services, ideas, and programs that will improve lawyers' practices and workflow.