Who gets credit for science? Often, it’s not women
In science, the ultimate measure of academic worth is the number of papers on which you're credited as an author. There are subtleties that matter, such as your position in the author list and whether others cite your publications, but it's hard for those factors to outweigh raw numbers. Other things, like grants and promotions, also matter a great deal, but success in those areas often depends on a long publication list.
That's why a paper released on Wednesday by Nature is significant: It presents data indicating that women are systematically left off the author lists of scientific publications. The gap between participation and authorship persists even after various career-advancement factors are accounted for. And it goes a long way toward explaining why science has a "leaky pipeline," in which women drop out of research at higher rates at each stage of their careers.
Making the team
It's pretty easy to crunch the data and see that women are underrepresented in the author lists attached to scientific papers. Figuring out why is much harder. The gap could result from women being historically underrepresented in some fields, from discrimination, or from differences in effort and commitment. Disentangling these factors is difficult because it involves identifying an invisible population: the people who should be on an author list but aren't.
Complicating matters is that there are no clear rules regarding what sort of contributions are needed to receive authorship. Members of a lab often help each other informally, and there's no clear boundary where that sort of help rises to the point of demanding authorship. As a result, a large amount of politics goes into who ends up on the author list, and there are often bad feelings among those who don't make the cut.
If you ask scientists about their publication history, they'll invariably have a story about a paper they should have been credited on but weren't.
The big challenge facing the researchers behind the new paper was distinguishing the equivalent of office politics from widespread bias. The key enabling data comes from the Institute for Research on Innovation and Science at the University of Michigan, which gathers data on more than 100 campuses that are part of 36 research universities. (For example, the University of California system counts as a single university, but it has 10 campuses, including UCLA, UC Santa Cruz, and UC Berkeley.) The data includes every grant held by researchers, any employees that grant money supports, and their job titles.
The data allowed the researchers to identify 128,859 people who were part of nearly 10,000 individual research teams. Those names were then cross-referenced against databases of scientific publications, linking individuals to nearly 40,000 papers and more than 7,500 patents. That linkage let the researchers address a more focused question: If a scientific team is publishing successfully, are there any patterns in which team members end up as authors of those publications?
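To make the record-linkage step concrete, here is a minimal sketch of what matching grant-funded team rosters against publication author lists might look like. This is a hypothetical illustration, not the study's actual pipeline; the column names, the sample records, and the simple name-normalization rule are all assumptions made for the example.

```python
# Hypothetical sketch: link grant-supported team members (IRIS-style roster)
# to credited authors on publications by normalized name. Field names and
# matching rules are illustrative, not the paper's actual method.
import pandas as pd

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and sort name parts so that
    'Smith, Jane A.' and 'Jane A. Smith' produce the same key."""
    cleaned = name.lower().replace(",", " ").replace(".", " ")
    return " ".join(sorted(cleaned.split()))

# Grant-supported personnel: who was on each research team (assumed schema).
roster = pd.DataFrame({
    "person": ["Jane A. Smith", "Wei Chen", "Maria Garcia"],
    "team_id": [101, 101, 102],
    "job_title": ["postdoc", "grad student", "faculty"],
})

# Credited authors on each publication (assumed schema).
authors = pd.DataFrame({
    "author": ["Smith, Jane A.", "Garcia, Maria"],
    "paper_id": ["P1", "P2"],
})

roster["key"] = roster["person"].map(normalize)
authors["key"] = authors["author"].map(normalize)

# Left join keeps every team member; a missing paper_id (NaN) flags a
# team member with no credited authorship -- the population of interest.
linked = roster.merge(authors, on="key", how="left")
print(linked[["person", "team_id", "job_title", "paper_id"]])
```

A real pipeline would need far more care than exact key matching: fuzzy name comparison, disambiguation of common names, and handling of initials and name changes. The point of the sketch is only the shape of the question, namely which rostered team members show up as authors and which don't.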