
Bar calls (again) for mandatory statewide judicial evaluations

Some members of Minnesota’s legal community are frustrated after more than a decade of lobbying for a mandatory statewide system of judicial evaluations. Feeling it is past time for such a system, proponents of statewide evaluations say the decision now rests with the state’s highest court.

What the districts are doing

First District

According to 1st Judicial District Administrator Gerald Winter, about two-thirds of the district’s 29 judges have participated in a voluntary evaluation program and have found it worthwhile. The first set of evaluations was conducted a few years ago and was directed at internal staff, Winter said. Questionnaires were sent to all staff in the county where the judge being evaluated was chambered, and roughly half responded, he said. A District Court staff person collated the information. This year the evaluations are directed at the bar. Judges either provided administration with the names of attorneys who have appeared before them frequently over the last year, or the administration randomly selected attorneys who have appeared in front of the judge being evaluated. Winter expects roughly half of the 40 to 50 attorneys per judge who were sent the state court questionnaires to respond. According to Winter, the evaluation process is labor intensive both for the attorneys filling out the questionnaires and for the administration, which collates the data and presents it to the judge being evaluated in both oral and written form.

Second District

The last time the 2nd Judicial District conducted judicial evaluations was the fall of 1995, according to District Administrator Suzanne Alliegro. At that time, 14 of the district’s 24 judges participated in the voluntary evaluation process, she said. Each participating judge selected four other Ramsey County judges and six other people they encountered regularly in their work, such as prosecutors, public defenders, attorneys, or support staff, she explained. The district hired an independent consultant to personally interview each of the 10 people selected by each judge and then present the information both orally and in writing to the judge being evaluated, explained Alliegro. The district paid the cost of the evaluations, which came to $8,400, or $600 per judge evaluated.

After the evaluation process was completed, the consultant asked the judges whether they found it useful: three said the process was useful, nine said they strongly agreed that it was useful, and two did not respond to the question.

Third District

Third Judicial District Administrator Michelle R. Ellefson explained that the district does not have a formal policy regarding judicial evaluation. The issue was discussed last year but the judges concurred that they would rather conduct evaluations on their own than adhere to a strict plan, Ellefson said. Still, several of the district’s judges conduct their own evaluations, and many are evaluated annually, she added. Most of the judges that conduct evaluations draw from questionnaires provided by other districts that have more formal plans in place.

Fourth District

According to 4th Judicial District Administrator Mark Thompson, the vast majority of judges up for election opt to participate in the district’s voluntary evaluation program. Only those judges who are up for election are invited to be evaluated because the district has found that it is “expensive to evaluate judges properly,” Thompson explained.

Although the precise cost is difficult to measure, especially when one considers the time involved to process the questionnaires, Thompson estimates that the district spends several thousand dollars to evaluate each judge. Lengthy, comprehensive surveys are sent to people selected by the judge being evaluated — attorneys, staff, jurors or others who have regular contact with the judge, Thompson explained. Those selected complete the questionnaires and send them to a centralized neutral third party that tabulates the results and prepares a report to be presented to the judge by a facilitator chosen by the judge. According to Thompson, the process has been useful for many judges up for election.

Fifth District

According to 5th Judicial District Administrator Richard Fasnacht, roughly half of the district’s judges have participated in the voluntary evaluation process. The process is self-initiated at any time by a judge mailing up to 25 questionnaires to persons selected by the judge. Those selected may be attorneys, staff, or others who use the court. Once the questionnaires are completed they are sent directly to a Supreme Court staff person who collates the information and mails it to a feedback judge selected by the judge being evaluated. The feedback judge then meets with the judge being evaluated to go over the results. The process is highly confidential since judges are referred to by code, not by name, so the person collating the information does not know the identity of the person being evaluated.

Sixth District

Sixth Judicial District Administrator Mary L. Helf said that while the district has no formal plan for judicial evaluation, judges are encouraged to engage in whatever form of evaluation they find useful. In conducting their evaluations, judges consider the programs in place in other districts. Like other districts, the evaluations are confidential.

Seventh District

Even though the program is voluntary, 21 of the 22 judges in the 7th Judicial District have been evaluated, said District Administrator Gregory Solien. (The one judge who has not yet been evaluated was very recently appointed.) Between 20 and 25 court staff who are in regular contact with the judge being evaluated complete written questionnaires, which are then sent confidentially to the district office for compilation, said Solien. The chief judge of the district, Solien, and one other person selected by the judge being evaluated then present the results. The program is not overly expensive, and every judge has found the evaluation process to be useful, said Solien.

Eighth District

Roughly half of the judges in the 8th Judicial District have participated in the district’s internally funded voluntary evaluation program, according to Assistant District Administrator Becky Dolen. The judges being evaluated choose a group of between 40 and 80 lawyers (and sometimes support staff) to be sent questionnaires provided by the state court, and also select someone they trust, usually another judge, who will “mentor” them during the evaluation, Dolen explained. Those picked to complete questionnaires are usually attorneys who have practiced before the judge being evaluated, but need not practice within the district, she added. In order to maintain confidentiality, each judge is assigned a number and the responses are sent to the state court administrator, who collates the responses and compiles the results, continued Dolen. The “mentor” chosen by the judge being evaluated and the judge then discuss the results of the evaluation orally, she explained.

Ninth District

About half of the 9th Judicial District’s 20 judges have participated in the district’s voluntary evaluation program in the last few years, according to District Administrator Dee J. Hanson. Judges in the district are encouraged to seek judicial evaluation once per term using any method the judge finds most comfortable, explained Hanson. The evaluation process is funded by the district but the costs are minimal because judges being evaluated select around 50 or 60 attorneys and support staff from the area who are then sent questionnaires provided by the state court, continued the administrator. There is no requirement that the attorneys completing questionnaires have actually appeared before the judge being evaluated, he added. Hanson said that 60 to 70 percent of those sent questionnaires return them to a person selected by the judge being evaluated — usually another judge, a retired judge, or Hanson — who collates the responses and discusses them with the judge being evaluated. According to Hanson, the judges who have been evaluated have been “quite pleased” with the process and the results.

Tenth District

According to Sue Specht, deputy district administrator for the 10th Judicial District, confidentiality concerns prevent her from disclosing much information about the district’s judicial evaluation process. However, she did say that some of the district’s judges have participated in the voluntary evaluation process more than once, while others have opted not to participate. The district uses “various hybrid” questionnaires that are derived from the report put out by the Joint Committee on Judicial Evaluation, and the responses are kept confidential. Sometimes the judges send out the questionnaires on their own, and sometimes they ask the district to do it; either way, usually no one but the judge and the person filling out the questionnaire see the responses, said Specht. Some of the judges use the responses as a self-evaluation tool, and others go over the results with other judges who suggest ways to improve. The evaluation system is internally funded, but the cost is minimal — just costs of duplication and postage, Specht said.

Earlier this spring, the Civil Litigation Section of the Minnesota State Bar Association (MSBA) endorsed yet another proposal in a long line of resolutions calling for mandatory statewide judicial evaluations — a plan that would bring uniformity to judicial evaluation procedures throughout Minnesota. (Currently, judicial-evaluation procedures vary widely from district to district.)

Some judges and lawyers have criticized local judicial evaluation procedures — particularly the judicial performance survey recently released by the Hennepin County Bar Association (HCBA).

Unlike the HCBA poll, individual results of the proposed statewide evaluations would not be disclosed to the public, but would instead be furnished to the individual judges so that they can improve their performance.

“We’ve looked at this issue long enough. There’s no reason not to go forward,” said Minneapolis attorney George Soule, who, in addition to serving as chair of the Judicial Selection Committee, has sat on several committees studying the issue of judicial evaluation.

St. Louis Park attorney Helen M. Meyer, an at-large member of the Judicial Selection Committee and former chair of the MSBA Judicial Evaluation Committee, also said that a statewide judicial evaluation program is long overdue. Meyer pointed out that although the issue was first addressed in the late 1980s — and the Supreme Court’s own pilot program recommended in 1993 that a statewide program be developed — a unified program has not yet been implemented.

“The [Minnesota] Supreme Court needs to take the leadership,” said Meyer.

Meanwhile, although Supreme Court Chief Justice Kathleen Blatz seems willing to take another look at the issue, at present the chief justice appears to advocate a more flexible approach.

“I support judicial evaluation for all judges in the state, but I would allow the districts to experiment — with the understanding that [the districts] would come back together and learn from [other districts],” said Blatz. “I’m less concerned whether [Minnesota] adopts a cookie-cutter approach than I am whether the evaluation process [provides useful information]. It’s time to take stock of where we are and get feedback from judges who have been through [the evaluation process].”

Noting that nine out of the state’s 10 judicial districts have developed programs for judicial evaluation during the last few years, Blatz observed: “We’ve come a long way since the mid-1990s.” The next step is for all 10 districts to evaluate their judges at regular intervals, the chief justice added. (See sidebar “What the districts are doing” above.)

Dispelling myths

One of the reasons frequently cited for the delay in implementing a mandatory statewide judicial evaluation plan has been the potential expense of such a venture.

But some members of the legal community, frustrated that efforts over more than a decade have not effected change, question whether money is really at the heart of the issue.

“We find money for this and that,” Soule pointed out. “[Judicial evaluation] is a basic.”

Blatz acknowledged that while funding is a “complicating factor,” it is not a “deal-breaker.”

Another concern of some opponents of statewide judicial evaluations is that the results would be used in press articles criticizing judges and by challengers in judicial elections, rather than as a self-improvement tool by judges. Opponents point to the HCBA poll as exemplifying these concerns. (See, e.g., “Like lambs to the slaughter,” a viewpoint written by Judge Jack Nordby in the June 12, 2000, issue of Minnesota Lawyer.)

However, supporters of statewide judicial evaluations say that the HCBA poll should not be used as a basis for criticizing the proposed statewide system, which bears little resemblance to the HCBA’s program.

“[The HCBA survey] is a different animal than what we’re talking about,” explained Terrance W. Moore, chair of the MSBA Civil Litigation Section Judicial Evaluation Subcommittee.

Soule said he doesn’t oppose the HCBA poll, but also stressed that the goal of that poll is to inform the public, whereas the goal of a statewide judicial evaluation program would be to help judges with self-improvement. There is a definite need for mandatory statewide judicial evaluation, he added.

Unlike the HCBA evaluation procedure, the proposed statewide plan would offer confidentiality — both of response and of result, according to Moore.

The statewide evaluation process would identify areas where a judge needs improvement, but would also highlight areas where a judge is doing well, said Soule.

Why we need it

Everyone, no matter what their line of work, benefits from evaluation — and judges are no exception, observed Soule.

Proponents of statewide evaluations said that the current district-run evaluation procedures are not sufficient to meet the needs of the judiciary.

Meyer cited three problems with the district-by-district evaluation process:

• the district evaluation programs are not mandatory;

• some programs do not require that judges send questionnaires to attorneys; and

• many programs have no provision for periodic reevaluation.

“On an anecdotal basis, the good and well-respected judges participate [in the evaluation process] and the judges who need [evaluation] the most don’t participate,” observed Meyer. And while she doesn’t think that only attorneys should be asked to evaluate judges, Meyer also pointed out that lawyers tend to provide more useful feedback than nonlawyers, especially litigants.

“Nonlawyers tend to base their evaluations on the outcome of their cases,” Meyer observed. She also emphasized that in order to fully reap the benefits of judicial evaluation, judges should be periodically reevaluated so that they can see whether they have improved.

Soule said that a major problem with the current evaluation system is that some attorneys never get the opportunity to give their input. Even though he has practiced in Hennepin County for many years and chairs the Judicial Selection Committee, Soule has never been asked to evaluate any judge. “Nobody I’ve talked to is getting the surveys,” he said.

Moore said that adopting a standardized program throughout the state is preferable to the current system, in which Minnesota’s 10 districts are each “doing their own thing.” A standardized program would draw from and optimize the approaches that worked in the various districts, while avoiding those that were less successful, explained Moore. The ability to process judicial evaluations internally and uniformly is also important, he added.

Historical perspective

The American Bar Association (ABA) has long said that the way to improve the quality of justice in the courts is to improve the performance of the judiciary — and the best way to accomplish that goal is to create and apply an effective plan for evaluating judges. Although several states have implemented statewide judicial evaluation programs, Minnesota is not yet among them — but not for lack of trying.

In a March 1998 Bench & Bar article, the MSBA Governing Council and MSBA Civil Litigation Section observed that although many of the state’s judicial districts have experimented with judicial evaluation, questions remained regarding the necessity, adequacy, and appropriate function of judicial evaluation programs.

A decade earlier, two separate groups of lawyers and judges unsuccessfully attempted to create a program to evaluate judges, and in 1990 the MSBA formally petitioned the Minnesota Supreme Court to establish the “Pilot Program on Judicial Evaluation.” Under the pilot program, which was considered to be a success, 12 trial judges and two appellate judges were evaluated. However, interest in the project diminished greatly, and the final recommendations of the Pilot Program Committee were not issued until February 1993 — nearly three years after the pilot program was launched.

Significantly, even though the committee recommended the establishment of a permanent program of confidential judicial evaluations and suggested that educational programs and training seminars be offered to help improve judicial performance in areas where evaluation results showed a need for improvement, the recommendations failed to address funding. (Six groups — the MSBA, the Minnesota State Bar Foundation, the Academy of Certified Trial Lawyers, the Minnesota District Judges’ Association, the Minnesota Trial Lawyers Association, and the Minnesota Defense Lawyers Association — had collaborated to fund the pilot program but were unable to raise enough money to move the project through the implementation phase.)

Due to lack of funding, the Pilot Program Committee’s final recommendations were not implemented. Still, the MSBA Civil Litigation Section identified judicial evaluation as an important agenda item and organized a project to reinvigorate debate on the topic during its 1994-95 term. The Governing Council also appointed a 21-member Judicial Evaluation Committee to address the issue.

The MSBA House of Delegates adopted the committee’s report in January 1995, and in March 1995, the MSBA Executive Committee agreed to “request that judicial evaluation be given higher priority on the list of court projects at the Supreme Court.”

In July 1995, then-Minnesota Supreme Court Chief Justice Sandy Keith formed the “Joint Committee on Judicial Evaluation,” composed of justices from the Supreme Court, judges from the Court of Appeals, members of the Minnesota District Judges Association Committee on Judicial Evaluation, and judges from the Conference of Chief Judges. The joint committee studied alternatives to the mandatory, statewide judicial evaluation system recommended in the 1995 report. The joint committee recommended that the judicial districts, the Minnesota Court of Appeals, and the Supreme Court each develop a confidential evaluation plan for their own use and advise the joint committee by June 1, 1997. Although the joint committee had planned to issue a report in December 1997 identifying the judicial evaluation plans implemented by the district and appellate courts, the report was not issued because not all districts had responded to its request.

By February 1998, all judicial districts with the exception of the 3rd had made a commitment to conduct judicial evaluation. Although the district-run pilot projects around the state differed in concept and execution, many similar themes emerged.

In keeping with the Supreme Court order of Jan. 11, 1996, confidentiality of results is a component of all the district-run pilot programs. Even so, the ways in which evaluation data is handled varies. In some districts, the judge being evaluated receives all completed questionnaires for storage or destruction, while in others staff or outside evaluators compile the raw data and deliver a summary of responses to the judge being evaluated. Some districts also share such information with the chief judge and court administrative staff.

For the most part, judges’ participation in the evaluation process is voluntary, and the selection of the group chosen to perform the evaluations also varies significantly across districts. For example, in some districts, staff, court reporters, and law clerks complete questionnaires but practicing attorneys do not. In other districts, attorneys are invited to participate in the evaluation process, and in many districts, the judge being evaluated selects the persons to complete the surveys.

Latest resolution

Once again, the push for a mandatory statewide judicial evaluation program is in the limelight.

“Over perhaps the last two to three years the MSBA Civil Litigation Section has been attempting to establish a system that would be useful and accepted by both the court and attorneys in terms of judicial evaluation,” said Robert Feigh, chair of the MSBA Civil Litigation Section. “[The section’s resolution is] essentially a game plan to institute a unified statewide system.”

Noting that funding remains a major, though not insurmountable problem, Feigh said: “This is a matter that is going to get a lot more attention in the near future.”

On April 12, 2000, the MSBA Civil Litigation Section endorsed a standardized plan for the evaluation of Minnesota’s judges. The resolution adopted by the section states that “the Minnesota judiciary can improve its already high standard through a program of judicial evaluation…that is standardized throughout the state,” and that “a standardized program for all districts will enable districts to share their experiences and improve the ongoing program.”

Under the program recommended by the Civil Litigation Section, surveys would be sent to 50 to 75 attorneys and court staff who have worked with the judge being evaluated in the last three years. Confidentiality is a key component of the plan.

Responses to the surveys would be sent directly to the Minnesota Department of Administration, Management Analysis Division (MAD) for compilation, although a district could elect to compile the results internally at its own cost. The compiled results (not the surveys themselves) would be provided to the judge being evaluated and to a facilitator selected by that judge. The facilitator and the judge would then meet to discuss the results and develop a confidential personal plan to assist the judge in developing strengths and managing weaknesses. Each judge would be required to be evaluated during the second year in office, and every three years after that.

Moore stressed that the subcommittee is not motivated by a desire to oust sitting judges, and emphasized that the committee specifically recommended that the evaluations not be tied to the election cycle. “We believe that the best way to improve the judiciary is to make good judges better,” said Moore.

ALJ program

Moore pointed out that the Civil Litigation Section’s model is not unlike the mandatory judicial development program already being used to evaluate the state’s administrative law and workers’ compensation judges. The administrative program is also being touted by the ABA as a model for other states.

In the fall of 1998, Chief Administrative Law Judge Kenneth A. Nickolai contracted with MAD for assistance in designing a written questionnaire to solicit feedback from attorneys and lay people who have appeared before administrative law judges (ALJs). The questionnaires asked all respondents to rate a judge’s performance in areas concerning judicial conduct and management of proceedings — and also asked attorneys to comment on the judge’s legal knowledge and abilities. Open-ended questions allowed respondents to comment on a judge’s strengths, areas needing improvement, and fairness or bias.

The third-party division received and tabulated completed questionnaires, and created an individual report for each judge. In addition to reporting statistical information, the division summarized the written comments.

More than 3,500 surveys were mailed and 46 percent were returned; 75 percent of the returned questionnaires were completed by attorneys. Typically, a response rate of 60 percent or higher is required before survey results can be accepted as representative of the population.

Even though it was disappointed with the low response rate, MAD described the results as “very positive” and observed that the results provide baseline figures for assessing judicial performance improvement efforts.

“We want to take advantage of what we’ve learned,” said Nickolai. “Our plan is a cycle of professional development … including in-house training to help [judges] strengthen their skills.”

Nickolai is optimistic that the first round of the evaluation program has quelled the bar’s concerns about confidentiality. An attorney’s responses are kept confidential from the judge being evaluated (and from administration), and the compiled results are not used against a judge, but rather are used for self-improvement, explained Nickolai.

The cost of the evaluation process is relatively low, especially when the long-term public benefits are considered, said Nickolai. Sending one judge to the National Judicial College in Reno, Nev., for one three-to-five day course costs several thousand dollars — and the judge must be taken off the calendar for that time, said Nickolai. The new program costs $662 per judge — including personnel costs, the contract with MAD and postage, said Nickolai.

Concept well-received

According to Moore, the concept of statewide judicial evaluations has been well-received.

“Rank and file judges by and large support the evaluations,” said Moore, who pointed out that the districts that implemented voluntary judicial evaluation programs reported that a good percentage of the judges opted to participate.

Soule also said that most judges are supportive of judicial evaluation.
