How to Fix Law Schools

By: David (law school)

The Current Problem

I've previously posted a critique of law schools. More specifically, I critiqued how law schools approach teaching and grading. For the sake of context, I want to offer a new way of thinking about the current system before diving into one solution on how to fix the problem. (Note: Though this essay is applicable to virtually all law schools, I address it specifically to Georgetown's. Also, the recommendation I make is geared toward 1L courses.)

Under the current regime, grades are determined by three factors: personal performance, peer performance, and professor preference.

  • Personal performance is how well the law student performs. Do they know the content? Can they cite appropriate cases?

  • Peer performance is the same as personal, but for--you guessed it--peers. Peer performance matters because law school grades on a curve, which means only a set percentage of students can receive each grade (e.g., only 12% can get an A). In effect, everyone in the class is ranked best to worst based not solely on how well they perform on the exam, but also on how well they perform relative to their peers. That means 23% of the class could turn in solid A material, but still only 12% will actually get an A. How do you determine who within that 23% gets the A? You compare them against one another rather than against an absolute standard. And that brings us to the third category...

  • Professor preference operates on at least two levels: 1) arbitrarily (insofar as each professor weighs different factors differently) determining which of several virtually identical exams to rank higher than the others, and 2) deciding how to grade exams based on the professor's personal preference. (Some want eloquent and thoughtful paragraphs, for example, while others just want you to throw in everything you can think of. The catch is that you don't know all of their preferences before the exam.) Their approaches to grading, as far as I'm aware, are not based on research (i.e., "I grade this way because studies indicate this approach is the best way to evaluate x, y, and z") or on how a student's response reflects the work of top professional lawyers and how top lawyers think. Instead, professors seem to use whichever method is most expedient or simply feels right to them. They seem to write the same types of exams they were given as students, rather than considering whether that's the best approach.

This approach to grading makes it a near-certainty that though I finished in the top third of my Section, several people who didn't probably knew the material better than I did. Similarly, I probably know the material better than several people with higher GPAs. The grading system outlined above and in my previous post, along with the silly law school curve, makes comparisons almost meaningless. That meaninglessness also means law schools cannot derive useful data by comparing exam performance within classrooms, between classrooms/professors, between year groups, or between schools.

Furthermore, if administrators familiar with the science of adult learning rated professors' test questions under the current system, it's doubtful every professor would receive an A for question quality, relevance to the profession, coverage of the most important parts of the subject, lack of ambiguity, effectiveness at conveying the material, and so on--assuming those are goals of the exams.

The problem with the three-pronged approach (personal performance, peer performance, and professor preference) is simple: grades should only be based on personal performance as compared to a fixed, absolute standard. By also factoring peer performance and professor preference, law schools inject a substantial amount of unhelpful and unnecessary artificial arbitrariness rather than just testing whether each student learned what they were supposed to learn.

If a couple dozen people turn in A-quality work, then they deserve an A. Similarly, if nobody in the class performed well enough to get an A, then nobody should get an A. Setting a curve (aka a quota) is a practice unsupported by any evidence-based learning theories or research I'm aware of (I have a Master of Science in adult education and a Master of Education).

In short, it's irresponsible and reckless to be entirely cognizant of the current grading structure's myriad defects and still stubbornly refuse to make any changes whatsoever. To do so while also collecting $50k a year in tuition seems almost criminal.

The Solution

The reasoning behind shifting to a personal performance-based system is simple: if the student understands the subject, that's all that should be required. So what system would most efficiently identify what students know? Just switching to traditional A-F letter grades without a curve?

No. Why? In addition to all of the issues associated with letter grades identified in Part 2 of my series on competency-based education (CBE), law schools must be vigilant to prevent grade inflation. It's not difficult to envision a non-curve law school where almost all students receive an A even when most students only turn in average-quality work, because it already occurs in other programs.

So what's a law school to do? I recommend a competency-based system. I've already written a 13-part series on why competency-based grading is superior to letter grades, including:

  1. It's far more objective. In fact, it's the most objective form of scalable assessment I'm aware of.

  2. It provides meaningful/useful info, unlike letter grades. What does a B+ student know that a B student doesn't? Hint: There is no way of knowing by merely looking at the letter grade. This is doubly true in a system that uses a curve.

Let's explore how law schools, specifically, could leverage a CBE system.

CBE for Law Schools

The best way to convey the CBE concept is to show how it could be implemented. It's not scary, extraordinarily difficult, unduly costly, or anything else that provides a legitimate excuse for not doing it. The only reason not to explore it, it seems, is intransigence or laziness.

The school should first focus only on 1L courses. Everyone takes the same core courses their first year, so there is a large pool of professors who can help. It also makes sense to fix the first year of law school before trying to fix the second or third: presumably, the reason we all take the same set of courses our first year is that they provide a necessary foundation for future learning.

Competency Creation

To create the competencies, the school will likely have to pause traditional exam creation. Pivoting like this isn't novel: in 2012, Facebook determined that mobile was the future, froze all desktop projects, and shifted its entire focus to getting its mobile app up and running. If Facebook could pivot during a year in which it was a publicly traded company under intense scrutiny, then surely a private law school can pivot as well.

Things to consider when developing competencies:

  1. Difficulty level. Torts I questions should be less difficult than Torts II questions. But the goal of law school isn't to weed people out. Instead, focus on making competencies that test knowledge and abilities in course objectives and that can be built upon. Don't make some competencies super difficult just to see who can answer them best. Each competency should aim to measure a specific concept or set of concepts that are necessary to become a thoughtful and effective lawyer. Put another way, there should be a distinct purpose for each competency--not testing for the sake of testing.

  2. Duration. Some competencies may only require listing elements or factors, while others may require complex answers and critical thinking. Some may take three minutes, while others take an hour. It's important that when developing the competencies, the professors and administrators note a "Time to Complete" estimate next to each one so professors can ensure their exams will be a reasonable length as they select various competencies for their exams. All competencies of the same expected duration should be of a very similar difficulty level.

To create competencies, professors should first research and debate the following: What makes a skilled lawyer? Should we place more emphasis on readiness to pass the bar, or preparation for a successful career? Does the proposed competency provide meaningful information to future employers? What is the best way to assess desired skills? (My suspicion is that few professors have undergone extensive training in learning theories and neuroscience, and those without such knowledge also do not seek the assistance of evaluation experts.)

In short, professors should work together to create the competencies they believe law students should learn. Faculty should be able to justify each competency with specific facts, including empirical research whenever applicable. The school may even want to work with other law schools to develop competencies, combining their resources and brainpower. If the competencies are not created as part of a collaboration with other law schools, then the university should only admit competencies from other schools by a majority vote by faculty. This helps maintain high standards via self-policing.

After debate and competency creation, professors would need to curate the competencies by culling the best from all the proposed competencies. Instead of each contracts professor writing his or her own individual exam, all of them should agree which of the several competencies should join an approved competency bank. When voting whether to admit a competency, the professors should do so blindly (don't know whose proposal it is unless the creator(s) have openly discussed it, which is totally fine) and anonymously (don't know who voted for or against it) so as much bias as possible is removed, people feel free to be honest, and the competency is debated and approved on the merits.

If a professor is unable to convince his or her peers that his or her idea should join the competency bank, then that indicates the suggested competency is probably not as good and useful as he or she believes it is. This peer-approval system acts as a checks and balances apparatus similar to peer-reviewed papers. By requiring the approval of the majority of professors and administrators on the committee, there is a system for separating the wheat from the chaff and ensuring an output of high-quality, relevant competencies.

Regardless of the source (i.e., whether or not it originated from a Georgetown professor), once adopted, the law school should make all competencies viewable to the public and specifically share them with other universities. There is no reason to hoard good ideas. There may be some fear of freeloaders, but failing to act for that reason would leave us on the losing end of a Prisoner's Dilemma.

There shouldn't be a "gotcha question" on exams. Making the standards visible gives the students time to clarify the standard, ask their professor how to build/acquire the skill and knowledge to answer it, and so on. There isn't a reason to hide this. If the standards are legit, then the only way to "game" the system is to build those abilities that will make students great lawyers.

However, the specific exam question shouldn't be visible to students ahead of time. That is, they'll know the skill/ability expected, but they won't know what to apply it to until test day. They also won't know which particular competencies from the bank they'll be tested on.

Competency Example

The U.S. Army's approach to training using Task, Conditions, and Standards (T/C/S) is a useful framework. I would also add an example answer, making it T/C/S/E. Here's an example:

  • Task: Identify every instance in the following paragraph(s) when consent could be at issue. Argue on behalf of both the officer and the suspect for why consent is or isn't required each time, and cite case law that supports each side's argument.

  • Conditions: Complete this as part of the on-campus final exam. This segment should take approximately 30 minutes.

  • Standard: Grading will be as follows: One point for each conflict identified, one point for citing an appropriate case for each side's argument for each conflict, and one point for applying each case to the given facts in the paragraph(s). For example, if there are ten total possible conflicts, then there would be 30 total possible points (one point for identifying, one for case citations, and one point for applying the law to the facts, times 10 conflicts). You must earn at least 80% of the total number of possible points to pass this competency (i.e., at least 24/30).

  • Example Answer for One Conflict: "Officer Tom may need to obtain Bob's consent before opening the box. Jones v. Smith says officers can't open boxes unless there is verbal consent and a handshake agreement. Doe v. Rogers, though, says a wink from an officer is necessary and sufficient. Officer Tom winked before opening the box, and Doe v. Rogers is more recent than Jones v. Smith, so Officer Tom opening the box was legal."


  • In the above example, the student would need to earn at least 24 of the 30 possible points (80%) to pass the competency. Because each point is important, professors must be clear in their instructions.

  • Professors should give students practice questions throughout the semester to acquaint students with this mode of assessment (see paragraph on formative assessments below).

  • Again, the students wouldn't know that this particular competency, out of all the criminal justice competencies in the bank, would be on the final exam, and they wouldn't know the specifics of the scenario and content of the paragraphs until they took the exam. However, they would recognize the T/C/S/E and already know they should understand the concept of consent inside and out in case the professor decided to test that competency at the end of the semester. They would also be able to ask clarifying questions about the competency during the semester, which may help professors identify ambiguities and craft more precise--and therefore useful--metrics and questions prior to exam day.

  • Finally, not only would students not know which competencies would be on the final exams, they also would not know the specific scenario (aka fact pattern) the professor will use on the exam. The only thing they'd receive prior to the exam is access to all the possible T/C/S/E's.
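To make the Standard above concrete, here is a minimal scoring sketch in Python. Everything in it (the function name, the response format, the particular score distribution) is hypothetical; it simply encodes the three-points-per-conflict rubric and the 80% pass threshold from the example:

```python
def score_competency(responses, pass_threshold=0.80):
    """Score a consent competency per the T/C/S/E standard: one point each
    for identifying a conflict, citing a case for it, and applying the law."""
    points = sum(r["identified"] + r["cited"] + r["applied"] for r in responses)
    possible = 3 * len(responses)
    return points, possible, points / possible >= pass_threshold

# Hypothetical student: identifies all 10 conflicts, cites cases for 8 of them,
# and correctly applies the law for 7 of them.
responses = (
    [{"identified": 1, "cited": 1, "applied": 1}] * 7
    + [{"identified": 1, "cited": 1, "applied": 0}] * 1
    + [{"identified": 1, "cited": 0, "applied": 0}] * 2
)
points, possible, passed = score_competency(responses)
print(points, possible, passed)  # 25 30 True -- 25/30 is 83%, which clears 80%
```

The point is that once the rubric is written this explicitly, any grader (or auditor) should reach the same total.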

CBE Implementation

The next step after creation and adoption is for the professors to choose which competencies from the bank they want to use on their final exams. This requires coordination regarding the difficulty level/duration, agreement on the baseline, and agreement on the number of competencies they'll test.

Difficulty level/duration. Professors must ensure their exams are all more or less equal to one another when covering the same topic in the same semester (i.e., all professors teaching contracts in the fall semester would need to coordinate with one another). The reason is that if one professor gave all the simplest competencies to her class, while another professor gave all the most difficult ones to his class, then the whole idea of having a Dean's List would continue to be a joke because student comparison would be as unfair and meaningless as it is now. Additionally, all competencies of similar duration should be worth equal weight, and all subjects should have the same exam durations.

Baseline. The professors of a topic (I will keep using contracts as the example) should test on at least the same baseline competencies. These are the competencies that all contracts professors will use in a given semester.

Number. In addition to difficulty level, it's important that all professors of a given 1L topic offer the same number of competencies so comparisons for the Dean's List are fair.

Here is an example:

  • Suppose the contracts professors agree the exam will test a total of eight competencies for a 4-hour exam.

  • The contracts professors may agree their exams will have two 5-minute competencies, one 20-minute competency, three 30-minute competencies, and two 1-hour competencies.

  • Three of those eight competencies (the three the professors believe are the most important from the list of competencies in the bank) should be the same on each contracts professor's exam. These are the baseline competencies. This ensures all professors teach at least a fundamental core of the most important elements of their subject area that all aspiring lawyers should know.

  • Each professor may then choose the other five competencies from the bank to round out the exam based on what they focused on over the semester in their course and/or what they think is most important (as long as they meet the agreed duration specifications). That is, not all contracts professors would necessarily have the same eight competencies as one another, but their exams would all be roughly the same length and level of difficulty.
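The exam-assembly rules above (a shared duration template, shared baseline competencies, a fixed total length) reduce to a simple validity check. All of the names and competency IDs below are hypothetical illustrations, not part of any real system:

```python
from collections import Counter

# Agreed template: two 5-minute, one 20-minute, three 30-minute, and two
# 60-minute competencies -- 240 minutes, i.e., a 4-hour exam.
AGREED_TEMPLATE = Counter({5: 2, 20: 1, 30: 3, 60: 2})
EXAM_MINUTES = 240

def valid_exam(competencies, baseline_ids):
    """Check one professor's exam against the shared template and baseline."""
    durations = Counter(c["minutes"] for c in competencies)
    has_baseline = baseline_ids <= {c["id"] for c in competencies}
    total = sum(c["minutes"] for c in competencies)
    return durations == AGREED_TEMPLATE and has_baseline and total == EXAM_MINUTES

baseline = {"K-offer", "K-consideration", "K-breach"}  # the three shared competencies
exam = [
    {"id": "K-offer", "minutes": 60}, {"id": "K-consideration", "minutes": 60},
    {"id": "K-breach", "minutes": 30}, {"id": "K-damages", "minutes": 30},
    {"id": "K-parol", "minutes": 30}, {"id": "K-SOF", "minutes": 20},
    {"id": "K-defs-1", "minutes": 5}, {"id": "K-defs-2", "minutes": 5},
]
print(valid_exam(exam, baseline))  # True: template matched, baseline included
```

Any exam that swaps a 30-minute competency for a 60-minute one, or omits a baseline competency, fails the check, which is exactly the coordination the Dean's List comparison depends on.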

Usefulness of Competencies

1) It provides a fair way to evaluate students

All exams would take place on campus to ensure equal and fair testing environments for everyone, and students must pass a set percentage or number (whichever the university decides to measure) of competencies to pass the course. For example, they may need to pass six of eight competencies from their contracts exam in order to receive a passing grade in that course.

The list of competencies, along with "pass" or "not pass" next to each, would appear on the student's transcript. (There is no "fail," even if a student passes only one competency out of 10. It's on the student to explain that to employers. Ideally, students would have a chance to retake an exam--though with different competencies or fact patterns--in the summer.)

The Dean's List and a commendation with a status equal to "Top 10%" (perhaps a title of "Legal Scholar"?) would be based on the number or percentage of competencies each student mastered, with the standard established prior to the start of the semester. If all students have the opportunity to pass a total of 40 competencies in a semester (for example, 10 in torts, 10 in criminal justice, 10 in contracts, and 10 in civil procedure), then passing 34 could put you on the Dean's List and passing 38 could make you a Legal Scholar. This would also mean we no longer need a silly grading curve to differentiate between students. The valedictorian would simply be the student who passed the minimum number of competencies required to graduate and has the highest percentage of passed versus attempted competencies upon graduating. If two students have the same percentage, then the student with more total passed competencies would be valedictorian.
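The honors thresholds and the valedictorian tiebreaker described above reduce to a few lines of code. The cutoffs (34 and 38 out of 40) and the student names are the hypothetical numbers from the example:

```python
def honors(passed, deans_list=34, legal_scholar=38):
    """Map a semester's passed-competency count to an honors title, if any."""
    if passed >= legal_scholar:
        return "Legal Scholar"
    if passed >= deans_list:
        return "Dean's List"
    return None

def valedictorian(students):
    """students: list of (name, passed, attempted) tuples. Highest pass rate
    wins; ties go to the student with more total passed competencies."""
    return max(students, key=lambda s: (s[1] / s[2], s[1]))[0]

print(honors(35))  # Dean's List
# Ada and Ben both pass 95% of attempts, but Ada passed more in total.
print(valedictorian([("Ada", 114, 120), ("Ben", 95, 100), ("Cam", 110, 120)]))  # Ada
```

Note that because the standard is absolute, every student who clears 34 makes the Dean's List; there is no quota to apportion.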

2) It's objective

Granted, a competency-based system isn't entirely objective, but it minimizes the subjectivity so that two professors grading the same papers are highly likely to award the same evaluation. That is, grading would no longer be so arbitrary and based as much on professors and peers as on personal performance.

3) It allows for an appeal process to ensure fairness

Another benefit of this CBE program is that, unlike under the current system, where grades are final, the school could implement something similar to the National Football League's (NFL) challenge rule: each student would be allotted two "challenges" per semester, each usable to contest the grade on a single competency (e.g., one competency from contracts and one from torts). A third challenge would be awarded only if the student wins both of the first two; if one or both are denied, there is no third challenge. The number of challenges a student makes, along with a tally of how many were affirmed, should appear on the student's transcript, which could discourage frivolous or excessive challenges. Each challenge would be blindly evaluated by a three-person committee of professors (who would not know the student's identity or who taught that particular course). Membership on the committee would rotate, and members could receive a stipend to compensate them for the effort.
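As a sketch, the challenge allotment could be tracked like this; the class and its bookkeeping are a hypothetical rendering of the rule above (two challenges, a third only after two successes):

```python
class ChallengeTracker:
    """Track one student's grade challenges for a semester, NFL-style."""

    def __init__(self, base_challenges=2):
        self.remaining = base_challenges
        self.made = 0          # total challenges filed (goes on the transcript)
        self.affirmed = 0      # challenges the committee upheld
        self.bonus_awarded = False

    def challenge(self, affirmed):
        """Record one challenge and its outcome from the blind committee."""
        if self.remaining == 0:
            raise ValueError("no challenges remaining this semester")
        self.remaining -= 1
        self.made += 1
        if affirmed:
            self.affirmed += 1
        # A third challenge is granted only if both of the first two succeeded.
        if self.made == 2 and self.affirmed == 2 and not self.bonus_awarded:
            self.remaining += 1
            self.bonus_awarded = True

t = ChallengeTracker()
t.challenge(affirmed=True)
t.challenge(affirmed=True)   # both succeeded, so a third challenge is granted
print(t.remaining, t.made, t.affirmed)  # 1 2 2
```

Because `made` and `affirmed` persist, the transcript tally the proposal calls for falls out of the same record-keeping.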

4) It provides a meaningful way to evaluate professors

In addition to allowing professors to more fairly evaluate students, this approach allows administrators to more accurately evaluate professors. For example, if a large number of students fail a competency, it may suggest the professor didn't teach it well. Of course, professors may try to artificially boost their numbers to look better, so a random sample of exams should be audited. As fortune would have it, a competency-based evaluation system makes audits possible. Unlike current grading, which is more of a black box (see my Why Law School Grades are Bogus post), competency-based grading allows others to grade the same exams objectively and consistently reach the same results.

5) It encourages better teaching

Finally, competency-based systems may spur better teaching. Why? Competencies encourage teaching to objectives rather than to the textbook, which in turn may encourage using multimedia, diagrams, pictures, graphs, tables, actively connecting cases and their consequences to one another, etc. Different contracts professors may use different textbooks, but they are all aiming at the same goal (100% of students passing all competencies), so the exam becomes less centered on a particular textbook and its cases, and more oriented toward the legal concepts the textbooks are trying to convey.


Addressing Objections

Professors lose autonomy

  • Not really. They still choose which competencies to test on. If they want to test on something that isn't a competency, they need only convince a majority of fellow professors that the competency is appropriate and should be added to the competency bank.

  • The university could make an exception to allow no more than one competency for each class to be entirely subjective in nature so professors could ask open-ended questions like, "Do you believe the Supreme Court is too political?"

Professors must teach to the test

  • No. The professor still picks what the exam will be about specifically, including the content of the question. The only real change is that the grading of the content becomes far more objective.

Other Considerations

  1. The professors should sprinkle formative assessments throughout the semester, ideally one after each mini-unit of learning (for criminal justice: one after covering consent, another after search, another after Miranda, etc.). The formative (as opposed to summative) assessments should be mandatory so students can familiarize themselves with the new exam format, but they need not be graded. Also, the formative assessments need not go through the same approval process as competency questions, though it wouldn't hurt. Each assessment could be as short as 3-5 short-answer questions. Students should be encouraged to discuss them with professors, and the professor could then post a sample answer, key points students should have mentioned, page references students can use to refresh their memories, and helpful links and learning resources.

  2. The university needs to prioritize professors' time. I couldn't find a Georgetown Law mission statement, but I'm assuming the goal is to produce the best lawyers possible, or the best critical thinkers, or something similar. It must be along the lines of providing a useful, meaningful education, which means the value the law school places on learning should easily outweigh the value it places on professors writing scholarly articles, attending conferences, making public speaking engagements and TV and radio appearances, penning newspaper editorials, etc. (Fortunately, at least one Georgetown professor agrees.) The development of 1,500+ law students each year must take precedence over the creation of often esoteric and little-noticed law review articles. That's not to say we can't have both or that articles aren't valuable, but as the Academic Life office likes to say, we need to have priorities. The law school could even incentivize professors by offering perks/bonuses for each competency they create that is adopted by the competency committee in the first year of implementation.

  3. A benefit of being the first adopter of a competency system is that we could set the standard; everyone who adopts the system after us would be compared to it. Employers (law firms, government agencies, nonprofits, etc.) could then say "standards from the following schools are preferred" and list the universities they think are adequately challenging. Such an outcome would lead to a virtuous cycle, where universities aim for the highest achievable standards so they are recognized by the most prestigious employers. Eventually, it would be ideal to have a consortium of law schools with a formalized bank of competencies so students and professors could be fairly compared across schools. Such an approach would level the playing field: we'd know whether the student from Yale really knows the content better than the student from Arizona State University, or whether brand name, LSAT score, and family connections are actually pulling most of the weight in job applications. Schools would no longer serve as the main proxy for how good a law student might be, because the student's list of competencies would make it clear.