Understanding Bootcamp Outcomes in 2024
What does it mean to "get" a job in tech in 2024? As bootcamp programs try to understand and explain employment outcomes, the answer turns out to be more complicated and nuanced than one might expect.
This is part three of three:
- Employment Outcomes & Fulfilling Promises: Why should bootcamps like Turing report outcomes and, more importantly, how should you try to understand them as an alum, job hunter, or prospective student?
- Tech Jobs After Turing (2024): What really happened to the people who attended Turing during the most difficult period of the tech downturn?
- Understanding Bootcamp Outcomes in 2024 [this post]: Why is it so hard to gather outcomes data, and then to make meaning of it?
How Long It Takes To Get a Job
Job hunting in a disrupted market takes longer. Not only are there fewer easy-to-access opportunities, but job hunters also often approach the hunt as a long process. Picking up a non-technical part-time job, for instance, is a smart way to keep the lights on during a job hunt; at the same time, it makes the hunt take longer than if it were a full-time effort. Each graduate needs to figure out the right path for them.
A job hunt that gets significant effort, meaning over 20 hours per week, has generally led to interviews in around 60-120 days. Cutting that time investment typically makes the hunt take much longer, but doubling the effort doesn't make it that much faster.
Based on what we've seen in recent history, I tell upcoming graduates to prepare for a 3-6 month job hunt and hope/work for it to be shorter than that.
What Is a Technical Job, Really?
Graduates job hunting in a tougher market have made smart decisions to benefit themselves in the short and long term. Many have taken on paid internships. Some have engaged in unpaid work, whether for a community (like RubyForGood) or for a company as an unpaid internship. Others have worked part- or full-time project-based contracts. Some have taken adjacent roles like Sales Engineer or Technical Writer. And the majority have signed on as full-time software developers.
These are great entry points for the individual, but they make reporting on aggregate outcomes more difficult. Is an internship a job? Is a part-time contract? What about full-time work that ends in under a year? You can quickly get into "case by case" considerations that erode the meaningfulness of aggregate data.
This issue is one of the key challenges that undermine CIRR-style reporting. We want to build an "apples-to-apples" comparison across training programs, but it is impossible to write definitions that are consistent across programs, germane to the student experience, and fit for the moment of the market. If one program targets people who've been self-studying for 6+ months and another takes in people who are totally fresh, how do we make meaning of the average salary? How do you count time-to-hire when someone takes an unpaid internship, then a paid internship, then a full-time role? There are multiple right answers.
What's a Good Placement Rate?
It's unwise to focus on the exact percentages down to the single digits. You can look at a pool of graduates and calculate a successful-outcome percentage like 71%. You could exclude some forms of employment and drive it down to 65%. You could include others, or exclude more folks from the denominator (by classifying them as non-job-seeking or otherwise exempt), and drive it up to 85%.
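To make the arithmetic concrete, here's a toy calculation with made-up numbers (not Turing's actual data): one hypothetical pool of 100 graduates produces 71%, 65%, or 85% depending entirely on what counts as a placement and who stays in the denominator.

```python
# Toy numbers, purely to illustrate how counting rules move the rate.
# None of these figures are real outcomes data.

graduates = 100              # everyone in the graduate pool
full_time_devs = 65          # signed on as full-time software developers
internships_contracts = 6    # paid internships, part-time contracts, adjacent roles
exempt = 16                  # classified as non-job-seeking or otherwise exempt

# Broad definition: any paid technical work counts; everyone stays in the denominator.
broad = (full_time_devs + internships_contracts) / graduates

# Narrow definition: only full-time developer roles count.
narrow = full_time_devs / graduates

# Broad definition again, but with exempt graduates removed from the denominator.
broad_excluding_exempt = (full_time_devs + internships_contracts) / (graduates - exempt)

print(f"broad: {broad:.0%}")                              # 71%
print(f"narrow: {narrow:.0%}")                            # 65%
print(f"with exclusions: {broad_excluding_exempt:.0%}")   # 85%
```

Same graduates, same jobs, three very different headline numbers.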
So what do we make of it?
In this moment, I think the best we can do is ask "do most graduates get paid work in the field or not?" If the data analysis is done with integrity and results in a number over 60%, the training program is probably doing a good job preparing most of their graduates for the industry. If it's 40-60% then there are legitimate concerns and questions to be asked. And if it's below 40% there is likely a significant problem.
Gathering Employment Data
How does a training program get outcomes data on their own graduates? It's harder than you think.
First, if you have social security numbers and big enough cohorts, state agencies will create anonymized aggregate reports based on tax filings – we'll call it "passive external reporting." That's way creepy, incredibly slow, and doesn't capture any nuance. It's not viable.
Second, you can consider "active self-reporting" – like graduates filling out a survey. This is the most widespread method and it has a lot of merit. At Turing, when students get a job we ask them to fill out an employment survey. It gives us a comprehensive picture of that person's experience.
But getting those surveys can be a lot of follow-up work. Some folks are excited to do it and others forget. As a student, would you rather your training program spend labor and money on your training or on chasing down surveys of past graduates?
What if an employed grad just doesn't fill out the survey – are we really going to mark them down as a failure? If you get an internship, is that the time to fill out the survey? If it converts to a job, do you fill out the survey again? If you leave there and get a job at a different place, new survey? What if you're contracting half-time – is that survey worthy?
Self-reported survey data is very valuable for understanding the individual experience, but it's still difficult to extrapolate into an aggregate picture.
Finally, there's "passive self-reporting," particularly via LinkedIn. Like all self-reporting, it relies on the honesty and accuracy of the individual student. It's one of the easiest methods because it doesn't involve a lot of individual follow-up – we trust that what people claim in public is true. Just like active self-reporting, there are problems at the margins when data isn't reported correctly.
To give the most accurate picture of outcomes, we really need to blend passive and active self-reporting – which also brings in a layer of interpretation and subjectivity. It is impossible to do meaningful and honest reporting in this space without subjective interpretation.
How We Gathered Employment Data
To build reports like Tech Jobs After Turing (2024), I relied on a blend of active and passive self-reporting. It started with taking our graduate pool and finding individual LinkedIn URLs wherever possible. From there we scraped current location, employer, job title, and whether the person is "Open to Work".
I then reviewed and audited the data, looking for things that don't make sense. If someone listed a role as a software developer but didn't have a company attached, I followed up to find the real story. For some grads with no (active) LinkedIn, I went back to job surveys to pull data. Some folks got a DM over Slack and were asked a few questions.
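The blending logic itself is conceptually simple even if the judgment calls aren't. Here's a minimal sketch of how passive (LinkedIn) and active (survey) records might be merged and flagged for follow-up; the field names and statuses are assumptions for illustration, not the actual schema or tooling behind these reports.

```python
# A minimal sketch of blending passive and active self-reporting.
# Field names and statuses are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GraduateRecord:
    name: str
    linkedin_title: Optional[str] = None    # scraped from LinkedIn, if a profile was found
    linkedin_company: Optional[str] = None
    open_to_work: bool = False
    survey_title: Optional[str] = None      # from the employment survey, if submitted
    survey_company: Optional[str] = None

def classify(record: GraduateRecord) -> str:
    """Blend passive and active data, flagging anything that needs a human follow-up."""
    # A title with no company attached is a red flag: follow up to find the real story.
    if record.linkedin_title and not record.linkedin_company:
        return "follow up"
    # Prefer whichever source has a complete employer + title pair.
    if record.survey_company and record.survey_title:
        return "employed (survey)"
    if record.linkedin_company and record.linkedin_title:
        return "employed (linkedin)"
    if record.open_to_work:
        return "searching"
    # No signal from either source: reach out directly (e.g. a Slack DM).
    return "follow up"

print(classify(GraduateRecord(name="example", linkedin_title="Software Developer")))  # -> "follow up"
```

The code is the easy part; deciding what "employed" means for each flagged record is where the real work lives.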
That leads to the issue of exclusions. As a training program, the temptation is to exclude as many unemployed alumni as possible to shrink the denominator and inflate the placement rate, but that's ethically questionable. In the process of this analysis, I excluded 9 graduates for a variety of reasons, including medical and family situations, pursuit of further degrees, and other extenuating circumstances.
And it's still subjective. Maybe a reader doesn't think an internship should count as a job. Maybe a person who was employed as a dev for six months but isn't currently employed should or shouldn't be counted. Maybe somebody who graduated and didn't find a tech job in 3 months and then enrolled in a Master's Degree program should be counted as a failure.
Even though we want data to be objective, making meaning of it will always be subjective.
Where We Go From Here
The bootcamp industry has been in trouble for the last two years. Some great programs have shut down. Some poor ones remain. A few new ones are even opening up. As we look into 2025, a likely rise in tech investment should accelerate the market for entry-level developers. So how do you find a good bootcamp program?
- Good education happens when decisions are made close to the student experience. The circle of feedback should be (students)-(staff)-(decision makers). That's what you see at every strong program in this space.
The converse is what you see in the white-labeled training programs offered at major universities across the country and in the giant corporate bootcamp programs: centralized command and control, with curriculum and decisions handed down to the campuses to be followed. Feedback doesn't flow well and the student experience suffers.
- Understand that education is always a risk. Students who went into college in 2004 had no idea they'd graduate into the "Great Recession." 2019 bootcamp students couldn't see COVID coming. Students who started college in 2020 are now graduating into a tough job market across many industries.
You just can't know what's going to be on the other side. Even if our employment rate were 95%, how would you know whether you're in the 95% or the 5%?
Whether a person pursues a degree, a certificate, or self-study, the conclusion is the same: the market you enter won't be exactly the same as the one you started with. The outcomes for other people don't guarantee your own.
No education can hand you an outcome on a silver platter. It's ultimately up to you.
- Data reporting is important. At Turing, we've lost the thread over the past year or two. As the number of CIRR-reporting schools dwindled to under five and the definitions stopped capturing the real experience in the market, the reports lost their value.
But now we're back on it. Times have been tough. Some folks have really struggled. And a lot of folks have thrived. We're working to tell their story, support those who are still searching for their opportunity, and push forward.
If a program isn't making a clear effort to tell the true stories of their graduates, then we have to wonder what they're hiding. If, as a community, we can accept that the outcomes are hard to count and tricky to interpret, we can still make meaning and celebrate success.
It has been a hard road, but there is light at the end of the tunnel. When I started Turing, I set out to build a school that could last 100 years. We have at least 90 to go.