A Research Proposal
Illustrated Law is a strong proof of concept that there can be a very different approach to learning law. Dense bricks of text are not the best or only way to convey legal knowledge. The vast majority of learning in law school doesn't occur in the classroom: universities explicitly recommend that professors assign two hours of reading for every hour of class.
But what if we could either cut the reading time substantially, or help students learn much, much more each week?
I have a research study proposal that would seek to either validate or invalidate this hypothesis: Illustrated Law materials are more effective for helping students comprehend legal information than traditional textbooks or full court opinions.
The study would recruit at least 30 undergraduate senior students from a selective university.
Why 30? I’d like to divide them into three groups, and it would be nice to have at least ten people in each group. The more, the better, so we can limit the effects of outliers.
Why undergraduates? The study would work best if the participants have no familiarity with law school or legal training. This ensures everyone is equally ignorant of how to read legal opinions. It puts them on the same footing as first year law students before their first law class and provides a universal baseline.
Why seniors at a selective university? The participants should reflect the types of students most likely to attend law schools so the results can reasonably reflect how first-year law students might learn.
The 30+ participants would be randomly assigned to one of three groups using a random number generator 15 minutes prior to the experiment. This would limit tampering and any advantage a participant might gain from an outside source.
The three groups would be: Illustrated Law, textbook, and original opinion. Rather than using names, the exam observers would assign each person a number to put on their exam so participants remain anonymous.
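To make the randomization step concrete, here is a minimal sketch of how the assignment could work in practice. The participant names, group labels, and function name are all hypothetical; the point is simply that a seeded shuffle deals everyone evenly into the three groups and gives each person an anonymous exam number.

```python
import random

# Hypothetical roster; real names would be collected at recruitment.
participants = [f"participant_{i}" for i in range(1, 31)]

GROUPS = ["illustrated_law", "textbook", "original_opinion"]

def assign_groups(roster, seed=None):
    """Shuffle the roster, then deal participants evenly into the three
    groups, returning an anonymous exam number for each person."""
    rng = random.Random(seed)
    shuffled = roster[:]
    rng.shuffle(shuffled)
    assignments = {}
    for exam_number, name in enumerate(shuffled, start=1):
        group = GROUPS[(exam_number - 1) % len(GROUPS)]
        assignments[name] = {"exam_number": exam_number, "group": group}
    return assignments

assignments = assign_groups(participants, seed=42)
```

With 30 participants, this yields exactly ten people per group, and the exam numbers (1 through 30) are the only identifiers that appear on the graded papers.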
A disinterested third-party law professor with no connection to Illustrated Law would come up with three to five short-answer questions for any landmark Supreme Court case of their choosing. The questions would be of comparable quality to what they might ask 1L students on a semester final exam: not crazy difficult, but not super simple. The professor could even draw the questions from exams they've given 1L students in the past.
The professor would not reveal which case the questions would be about until two hours before the experiment. This would prevent Illustrated Law from modifying its content to supplement the chosen case in any manner.
Two hours prior to the experiment, we would go to the law school bookstore, assign a number (1 through x) to every constitutional law casebook that covers the chosen case, and then use a random number generator to choose a textbook. Presumably all casebooks at a law school bookstore are of high quality, or else professors wouldn't use them.
We would also find the complete original opinion of the chosen case and print enough copies for each participant in the full-opinion group.
The participants in the Illustrated Law group would simply use the website, so the only preparation needed would be removing the paywall for the duration of the experiment.
Participants could use any portion of the website, textbook, or opinion.
Once everyone is seated with laptops in standby mode, casebooks closed, opinions face-down, and exams face-down, the participants would be instructed to begin.
The participants would have three to four hours to read the opinion and answer the three to five questions from the law professor. All responses would be typed into a Word document on a laptop, just as they are for final exams at law school. They would not be allowed to use the internet, outside notes, or check their cell phones. Once time is up, they must cease writing.
Observers in the room would ensure nobody cheated. They would also record the time each participant completed their exam so we could later see whether one method of learning is significantly more efficient.
Once the time ends, the observers would gather all the exams electronically. They would then print them out and randomly shuffle them together.
The exams would then go to the professor, who would grade each paper on a 0-100 scale and rank them from 1 through the number of participants. The professor would not know which source a participant used on the exam because the papers are shuffled and anonymous. They would base their grade only on the quality of the answer and how well it conveys comprehension of the Court's opinion.
We would then decode the papers to see which group the author of each paper was in. We'd then note how they ranked and how big the gap was between scores (e.g., was the score gap between the top-ranked paper and bottom-ranked paper only 11%, or was it 78%?). We'd also note how long it took to write each answer (this might reveal that some groups finished faster but scored worse, perhaps due to distress from not understanding, or perhaps due to overconfidence, for example).
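The decoding-and-comparison step above could be sketched roughly as follows. The scores and completion times here are made-up placeholder values, not study data; the sketch only shows the kind of per-group summary (mean score, mean time, overall score gap) the analysis would produce.

```python
from statistics import mean

# Hypothetical decoded results: (group, score out of 100, minutes to finish).
# These values are illustrative placeholders, not real data.
results = [
    ("illustrated_law", 88, 142),
    ("illustrated_law", 81, 155),
    ("textbook", 76, 180),
    ("textbook", 70, 174),
    ("original_opinion", 65, 205),
    ("original_opinion", 59, 198),
]

def summarize(results):
    """Compute each group's mean score and mean completion time,
    plus the overall gap between the top and bottom scores."""
    by_group = {}
    for group, score, minutes in results:
        by_group.setdefault(group, {"scores": [], "minutes": []})
        by_group[group]["scores"].append(score)
        by_group[group]["minutes"].append(minutes)
    summary = {
        group: {
            "mean_score": mean(d["scores"]),
            "mean_minutes": mean(d["minutes"]),
        }
        for group, d in by_group.items()
    }
    scores = [score for _, score, _ in results]
    summary["score_gap"] = max(scores) - min(scores)
    return summary

summary = summarize(results)
```

Cross-referencing mean scores against mean completion times is what would surface the "finished faster but scored worse" pattern described above.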
If the professor used questions from actual previous 1L exams, we could also compare how 1Ls with a semester of learning under their belts performed against undergraduates who had only a couple of hours to read the opinion.
Using this information, we should be able to discern if one method of learning is clearly superior. We could also go back and briefly interview all participants or specific participants based on how they performed.
This is only my first run at developing a suitable study to gauge the effectiveness of study materials, but you get the idea. What do you think?