If you dropped in on the Prospect Research, Management and Analytics (PRMA) team at Duke University on any given day, you would likely find us in the midst of a wealth screening. But if you had suggested wealth screenings to the team just a few years ago, they probably all would have quit.
Screening work used to be a major stressor. Maybe you feel that way now, or maybe you’re like me and thoroughly enjoy screening work. Either way, you can optimize the process and make your life easier. What follows are some suggested modifications that will hopefully relieve some of the stress of screening while also leading to better results.
The Old Way
In PRMA’s ideal world, screening work would take up 20 percent of our time. After all, it is just one of many aspects of our roles in prospect research. We want the time we spend on screenings to be time that is beneficial to the overall Duke Development community and its priorities (i.e., we get back good ratings), while not being a process that feels negative for us, the analysts. Annually, we screen between 20,000 and 40,000 records. Our team of eight analysts typically verifies the information returned from the vendor for 10 percent of those records.
Our former screening program had its strong points, but it also left many things to be desired. Unfortunately, the program was:
- mostly reactive, completing screening projects in the order in which they were requested, without too much consideration for the bigger picture;
- inconsistent in the capturing and storing of data;
- limited in the prioritization of vendor results before verification work began; and
- focused on quantity over quality.
Inconsistency in the process combined with a 15-minute time limit per prospect made screening verification something that often did not feel good for the team. Analysts knew they missed information despite often going over the time limit. They were stressed that they were “breaking the rules” while also not providing a complete picture of the prospect.
When we took a step back to look at our data, we realized that while our team had verified a large number of prospects, the majority of the ratings were below the major gift level and often ended up being reviewed and changed. We asked ourselves, “Why are we spending so much time and energy on something that isn’t getting us the results we or our partners desire?” We started to think more about how screening work fit into our priorities and goals.
Making a Change
In PRMA, we screen because identifying wealth is one of our core responsibilities. In addition to the goal of identifying prospects that are major gift level and above, one of our big goals is to be more proactive. Looking at our reactive requests showed us that the work we had been doing during screening verification was not enough; we still received numerous requests for research on prospects that had been rated in a screening. In order to free up more time to be proactive, we needed to get a handle on these reactive requests, which we realized we could do by being more intentional and consistent in capturing additional information during screening verification work.
We started to implement changes to the way we were thinking about the screening process; instead of focusing just on verification work, we looked at all of the steps included. We made a few changes to our planning, execution and delivery processes that had a major impact. We:
- designated screening program leaders,
- shifted the verification focus to quality over quantity, and
- standardized the capturing of data.
My coworker Elise and I, both senior prospect research strategists, became the designated screening leaders in PRMA. In this role, we work with PRMA and development leadership to plan the timing of screenings and priorities. We are the point people for the rest of the team for all things screenings, including verification and asset questions, and we work with everyone to make sure instructions are clear and deadlines are met. We also did a lot of the legwork of coming up with a standardized screening spreadsheet and rules of thumb that simplify the process for our team.
In order to be more proactive and strategic in choosing our screening populations, we had conversations with leadership and fundraisers to hear their ideas and prospecting focus areas. This also gave us an opportunity to explain our screening process. Now that fundraisers are more involved with the process and better understand it, we have found that they are more satisfied with the results, even if they aren’t all million-dollar prospects. Through these conversations, we have been able to better align our screening work with development priorities, and we have adjusted our annual screening timeline to fit the wants and needs of the community. We came up with screenings that recur annually (reunion classes, ~300 records verified; incoming parents, ~400 records verified; a selected school or unit, ~300-400 records verified) and also built out time to focus on our $1 million-plus pipeline. We round out each year with two targeted screenings that vary in focus depending on the priorities and needs for that year.
After determining priorities, our next focus was coming up with the general timing of each screening, where it falls in the fiscal year, and how it gets worked into PRMA’s workload. We plan our screenings before the start of each fiscal year. It is important to us that our team knows (at least generally) what is coming so that each analyst can prioritize and plan their work accordingly. We have two types of screenings: those that are more reactive and have time constraints imposed by our fundraising partners and those that are more proactive and flexible in their timing. In laying out the year, we are able to see where those proactive screenings best fit around the ones that have more time constraints. We worked really hard to be realistic about timing and how this work would fit into our team’s workload. We were able to come up with a general timing standard fairly easily: Each member of the PRMA team is reasonably able to verify 10 screening prospects per week in addition to their other work responsibilities. We came up with this calculation by looking at numbers from past screenings and then having conversations with the team to discuss how the timing felt for those screenings.
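The timing standard above boils down to simple arithmetic. Here is a minimal sketch, using the article's rule of thumb of 10 verified prospects per analyst per week; the example record counts and team size are taken from the figures mentioned elsewhere in this piece:

```python
import math

def weeks_needed(records_to_verify, analysts, per_analyst_per_week=10):
    """Estimate screening duration from the rule of thumb that each
    analyst can verify ~10 prospects per week alongside other work."""
    capacity_per_week = analysts * per_analyst_per_week
    return math.ceil(records_to_verify / capacity_per_week)

# e.g., an incoming-parents screening of ~400 verified records
# worked by a team of 8 analysts:
print(weeks_needed(400, 8))  # 5 weeks
```

Laying the year out with estimates like this makes it easy to see where a flexible, proactive screening can slot in around the deadline-driven ones.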
Handling Vendor Data + Our Criteria for Verification
Because we schedule our screenings out ahead of time, we generally know how we are going to use our allotment of records from the vendor for each screening. Some of the populations that we screen each year have their own built-in criteria for inclusion. For example, every year we screen incoming parents; this number doesn’t vary much, as it is tied to admission numbers. After taking into consideration the number of records that are included in the annual screenings, whatever we have left in our allotment from the vendor we can then dedicate to the other screenings we want to complete that year.
Once the records are screened by the vendor and the results are returned to us, we look at the results to determine which records will get verified by our team. Some of our inclusion criteria are used for every screening, but a lot varies depending on the focus of the particular screening. In PRMA, we have the benefit of having a dedicated analytics team to help us look at the data and determine which of the prospects are most likely to be rated at higher levels; but this work is possible even if you don’t have this resource. Some of the criteria we have used to include prospects in verification work include:
- main residence in a wealthy zip code or block group;
- certain job titles (e.g., C-suite, founder, partner, entrepreneur, managing director);
- main regions of fundraiser travel; and
- prior giving to Duke.
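Even without a dedicated analytics team, criteria like these can be applied as a simple filter over the vendor's results file. The sketch below is purely illustrative; the field names, titles and regions are hypothetical, not PRMA's actual schema or criteria:

```python
# Hypothetical inclusion filter for screened records.
# Titles and regions below are illustrative placeholders.
PRIORITY_TITLES = {"ceo", "cfo", "founder", "partner",
                   "entrepreneur", "managing director"}
TRAVEL_REGIONS = {"Northeast", "Southeast", "West Coast"}

def should_verify(record):
    """Flag a screened record for analyst verification if it meets
    any of the inclusion criteria."""
    return (
        record.get("wealthy_zip", False)                     # wealthy zip/block group
        or record.get("job_title", "").lower() in PRIORITY_TITLES
        or record.get("region") in TRAVEL_REGIONS            # fundraiser travel area
        or record.get("prior_giving", 0) > 0                 # prior giving to Duke
    )

records = [
    {"job_title": "Founder", "region": "Midwest", "prior_giving": 0},
    {"job_title": "Teacher", "region": "Midwest", "prior_giving": 0},
]
print([should_verify(r) for r in records])  # [True, False]
```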
Verification Spreadsheets + Team Training
After the verification population has been determined, we divide the list among the analysts and drop the data into our standardized screening verification spreadsheet. Before distributing the spreadsheets, we make sure to brief the team on anything unique to the particular screening, or any modifications to the spreadsheet.
As all of our analysts have participated in numerous screenings, we now simply include some reminders and notes in an email that we send out to the team. When we originally switched up our process, however, Elise and I spent a good deal of time working with our team to train them. We started with multiple group training sessions on topics such as:
- Process: the basics of verification/rating work, in which we covered resources to use, triangulating identified assets, etc.;
- Housekeeping: the basics of how to fill out our screening spreadsheet, in which we covered consistency in capturing data, required fields, what goes where, etc.; and
- Assumptions: general if/then rules. For example, based on feedback from fundraisers that our ratings skew quite conservative, one of the main if/then rules we established was that if a prospect’s rating is within 10 percent of the next rating level, the rating should be bumped to that next level unless there is a very good reason not to do so.
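The “within 10 percent, bump up” rule can be sketched in a few lines. The rating bands below are hypothetical placeholders, not Duke's actual rating levels:

```python
# Illustrative sketch of the "within 10 percent, bump up" rule.
# Rating bands are hypothetical, not Duke's actual levels.
RATING_LEVELS = [100_000, 250_000, 500_000, 1_000_000, 5_000_000]

def apply_bump_rule(capacity, levels=RATING_LEVELS, threshold=0.10):
    """Return the rating band for a capacity figure, bumping to the
    next band when capacity is within 10 percent of it."""
    # Highest band at or below the capacity (floor at the lowest band).
    rating = max((lvl for lvl in levels if lvl <= capacity), default=levels[0])
    idx = levels.index(rating)
    if idx + 1 < len(levels):
        next_level = levels[idx + 1]
        if capacity >= next_level * (1 - threshold):
            return next_level
    return rating

print(apply_bump_rule(960_000))  # within 10% of $1M, so bumps to 1000000
print(apply_bump_rule(600_000))  # stays at 500000
```

Encoding judgment calls like this as explicit if/then rules is what keeps ratings consistent across eight analysts.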
We also spent time reading articles and going through examples to help round out the team’s understanding of wealth distribution, intergenerational wealth transfer and unconventional indicators that could lead them to more appropriate ratings. All of this training work set our team up for success and has led them to feel empowered in their decision-making and rating work.
There are two main changes we made to the actual rating work involved in verification: eliminating time limits per name and using a standardized verification workbook. As I mentioned before, we previously had a 15-minute time limit per name, a rule that led to a lot of stress for the analysts. We have now completely eliminated the 15-minute time limit per name. Instead, we focus on overall deadlines, checking in periodically with the team to assess their status. This change removes the pressure to rush a rating when an analyst suspects that a bit of extra digging could really pay off, and it empowers our team members to make these kinds of decisions.
We feel comfortable with these changes because we have set the team up for success in decision-making and have been strategic and realistic with the timing of screening work. There are clear expectations that are set at the beginning of each screening project. Further, because of the time we spend on determining which prospects to verify, the ratings provided by our team are focused on the best prospects in each population. The mentality around screening work shifted greatly once analysts started seeing the higher ratings that were resulting from their work. At its core, screening work is about getting through a lot of names in a short period of time. That is still true for us; we aren’t doing exhaustive research on these prospects, but we are still able to be thorough and are capturing a lot of information through the use of a standardized screening workbook.
Verification Spreadsheets – The Details
Our screening workbook contains multiple spreadsheets: ratings formula, additional data, explanations and examples. We have made some tweaks over time, but it has mostly stayed consistent, so the analysts have gotten comfortable using it. The ratings formula spreadsheet has columns where asset values are entered and automatically fed into our capacity formula, eliminating the need for an analyst to do any calculations by hand. There are also fields showing how far the rating is from the next level (in both dollar amount and percent) so that analysts can quickly decide whether to keep digging on a prospect, bump them to the next rating level, or move on. The additional data sheet has columns for all the information that is generally found in the course of verification research. Instead of going into our database and updating data on a case-by-case basis, analysts can copy and paste all of this information into the spreadsheet. Typical fields in this tab include a rating comment, contact information changes and biographies.
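The distance-to-next-level helper columns amount to a small calculation. Here is a sketch under assumed rating bands (the dollar values are illustrative, not our actual bands):

```python
# Sketch of the workbook's distance-to-next-level helper fields.
# Rating bands are illustrative placeholder values.
RATING_BANDS = [100_000, 250_000, 500_000, 1_000_000]

def distance_to_next_level(capacity, bands=RATING_BANDS):
    """Return (dollars_short, percent_short) relative to the next
    rating band, mirroring the workbook's helper columns."""
    higher = [b for b in bands if b > capacity]
    if not higher:
        return (0, 0.0)  # already at or above the top band
    nxt = higher[0]
    dollars = nxt - capacity
    percent = dollars / nxt * 100
    return (dollars, round(percent, 1))

print(distance_to_next_level(940_000))  # (60000, 6.0): only 6% short of $1M
```

A figure like 6 percent short of the next band is exactly the signal an analyst uses to decide that a bit more digging is worthwhile.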
We capture a lot of information in our spreadsheets, and for the most part, it gets saved somewhere in our database. Once Elise and I do a general review of the team’s verification spreadsheet, all of the data is compiled to be uploaded. Even without an IT team or the ability to bulk load data, having consistent, organized spreadsheets should at the very least simplify the process of updating data in your database. You also might consider how you could get others, like student workers or volunteers, involved.
Completing a Screening
Once verification work is complete and the data has been loaded to our database, it is tempting to consider a screening complete, but this is not where the process ends. Our next step is sharing our results with the development community. How you share the information is likely to vary depending on the screening. If you had more input from partners along the way, you will likely want to share more information with them directly after completion. We never share our verification spreadsheets with anyone outside our department, but we will often have each analyst send informational and summary emails to their partners after screening work is completed. This is a good touchpoint with our partners and it lets them know all we have been working on in PRMA. This can be a simple summary of the screening and ratings that were newly added to our database, or it can be a more in-depth picture of the results from their territories including copies of the prospect comments that were loaded with each rating.
We also create a document to share with our executive leadership; we want to show them how much value we bring to the organization. These documents are not long or complex. We give a brief overview of the screening project and then give tables of the results, letting the numbers do the talking.
The very last step of each screening is taking time to review the process. Even changes that seem small can make a big impact on how things go the next time. In fact, the percent to next level field upon which we now heavily rely was not an original part of our process or spreadsheet. It is an idea that came out of review!
Our Results and Conclusion
After reading this article, my hope is that you’ll be able to garner some ideas from the changes we made to our screening process, and that you, too, will be able to relieve some of the stress of screening while also achieving better results. Since making these changes, each of our screenings results in:
- more highly rated prospects, and more consistent ratings across the team;
- fewer reactive requests from our partners; and
- more data captured and added to our database with greater accuracy.
Our process is still evolving and will continue to improve over time. The more we talk together as a team in PRMA, and the more we speak with our partners, the better our process becomes.
Molly Hamrick is a senior prospect research strategist on the PRMA team at Duke University. Prior to joining Duke in 2015, she worked for seven years in development research with Georgia Tech and North Carolina State University. Originally from Kings Mountain, NC, Molly received a B.A. in Economics and Management & Society from UNC Chapel Hill. She has also completed coursework toward a Master of Public Administration degree.