Open Data and You

Lessons from open science for the nonprofit sector in data sharing.

Photo by Alexander Grey on Unsplash

We are excited about new advances in open data in the sciences and the implications for data sharing in the nonprofit sector. Federal agencies have recently created an interdisciplinary, decentralized network of repositories to make data sharing easier for researchers. Despite this new infrastructure, many researchers are still unwilling to share their data, even though doing so benefits the field at large. We should be encouraging norms that reward researchers who share data, because it takes a lot of work for them to do so (especially data shared according to FAIR standards - meaning findable, accessible, interoperable, and reusable).


One challenge in creating these norms is that successful data sharing is ultimately a people problem, not a tech or infrastructural one - which makes it harder to address. Despite these difficulties, there are many examples of scholars who have created repositories that serve as data sharing hubs for their respective fields, built by those ‘on the ground’ - signalling the kind of culture shift that should make open data sharing easier in the sciences. Unfortunately, these hubs still suffer from low engagement among more senior academics, leading to a lack of momentum even when interest has been piqued.


Another challenge is a lack of training for researchers who are new to sharing data within a community, which results in messy or ineffective submissions. Improperly structured, “low quality” data can easily overwhelm a community and any repository that lacks trained, dedicated staff (i.e., most of them). Many researchers believe data sharing is important, but few actually make their data public once their research is complete, as this article in the Atlantic discusses. “Information stinginess” can also be attributed to the cost of data transfers - often more than researchers are willing (or able) to pay.


The biggest challenge, though, is addressing the ‘culture of secrecy’ that exists in disciplines where collaboration is less of a norm. Increasing transparency in the sciences goes beyond researchers. Journals, publishers, universities, funding agencies, and industry professionals all need to help share research materials and establish protocols for data storage and sharing so that the resulting outputs can actually be useful. Exploring how to address this dynamic is especially relevant for understanding how to enable and promote data sharing in the nonprofit sector, where collaboration can be difficult to initiate.


While researchers and professionals agree that data sharing is an important and necessary step towards a more collaborative future, the issues that hinder progress are ‘people problems’, which are harder to navigate. We wonder how we can act as catalysts to create better standards and incentives around data sharing, and what this culture shift would look like in the nonprofit sector.

Ajah · Finance · Prototyping
Deregulation, unrestricted funding, and focusing on the right (data) problem...what we're reading this week

A round-up of articles that have caught our eye in the last few weeks

From UX Indonesia on Unsplash

All of our articles this week are about funding, philanthropy and data. 


  1. We read (and listened to) this conversation among representatives of the philanthropic sector about what is next for data in the social sector. Participants spoke about the shift in power that data can give to the sector - enabling changemakers to access data can transform their work and make it more effective.

    Woodrow Rosenbaum, one of our partners at GivingTuesday, shared that “effective data use in the nonprofit sector is not a technology problem” but rather, a problem of framing the questions we want answers to. We wanted to pull this quote out because it is a central tenet of our approach to data in the sector. Addressing only technology is insufficient for making progress. 

  2. We read about possible reasons that funders are still not giving (warning: paywall) unrestricted support to nonprofits, despite an apparent shift that seemed to be starting in the spring of 2020. This is an interesting debate and we are excited that people are talking about it. The author highlights a lack of donor trust in the sector, which was interesting to us.

  3. The CRA released new draft guidelines about granting to non-qualified donees. There are several big shifts in these guidelines and lots of implications for the sector. We hope there is follow-up research and discussion about the impact of these changes - including negative ones. The long-term implications of loosening restrictions on how broadly foundations can grant are complex - and it’s not clear what those impacts will be. We want to see work to evaluate whether the impact matches the intention of this policy change.

Jesse Bourns · Prototyping
What we're reading this week...

A round-up of articles that have caught our eye in the last few weeks

From UX Indonesia on Unsplash

Two interesting things we read lately are both about auditors - otherwise known here at Ajah as the dark horses of social change.

  1. We are a little late on this one (given the Vancouver mayoral election happened in October) - but a former candidate, Colleen Hardwick, and her political party, TEAM, campaigned on the promise of creating a commissioner for the Downtown Eastside of Vancouver. The proposed commissioner would have been responsible for auditing non-profit organizations serving the area and their use of public funding from municipal, provincial, and federal governments, and would then have made recommendations to the government based on the results. This is not something we’ve seen before, and we hope it might signal a paradigm shift.

    Hardwick said “we need to start with a complete examination of which level of government is spending what money on which services – and then start looking at different and better ways to help people in need and improve the disastrous situation faced by far too much of our city.” 

    Despite the investment in non-profits in the DTES over the past few years, violent crime, mental health, and addiction issues continue to escalate in the city. We liked seeing Hardwick continually reference the need for data in making decisions about public policy (side note: she initiated the motion that added an independent auditor to Vancouver’s city hall). We’re curious to see how public servants can continue to push for transparency and accountability to their constituents.

  2. This article from the CBC examines the recent Auditor General’s (AG) report, which found that the federal government doesn’t know if its actions are reducing chronic homelessness in Canada. It is deeply concerning not to know whether the effort, time, and money spent thus far have had any positive effect for people experiencing homelessness.
    The Canada Mortgage and Housing Corporation (CMHC) and Infrastructure Canada have failed to collect sufficient data about their programs, which were designed to connect the most vulnerable people in Canada with homes. Without data on program use and efficacy, there is no way to know if the government is making any progress - or if it is causing more harm than good. Based on the limited data available, the AG found that the number of shelter users who are chronically homeless has actually increased since the housing strategy was launched in 2016. This is concerning, and it highlights the need to build the infrastructure for measuring a program’s efficacy alongside the program itself. You can read more of the AG’s report here.

Jesse Bourns · Prototyping
Questioning the Methuselah Strategy

Leadership turnover in social sector tech projects.

Photo by Fabian Heimann on Unsplash

We recently reread this article by Sean Boots and it made us think about how it applies to our work. In his article, Shrink projects to fit leadership turnover rates, Boots addresses the dangers of overly ambitious public-sector projects, where a single project’s timeline may span multiple changes in leadership. As a result, responsibility for the project’s success (or failure) doesn’t reside with a single individual but is instead spread across multiple leadership tenures, creating a lack of individual accountability. Although the private sector experiences similar effects (see this article on CDO turnover), it doesn’t deal with turnover at the same level as the public sector.

The nonprofit sector shares the public sector’s issues with leadership turnover and overly ambitious projects, which spells trouble for our projects (and probably your projects too, dear reader). These projects require singular vision and leadership to succeed, so high turnover rates put them at risk. In the nonprofit sector, we should ask ourselves why we are planning projects that are big enough to fail - and why we expect them to succeed under these conditions.

Boots offers two potential ways to address project incompletion and failure. First, reduce the number of stakeholders and oversight actors involved in a project to create more incentive to see things through to completion (we like to call that “creating benevolent dictators” in our projects); the success or failure of a project reflects more on individuals when fewer people are involved. Second, scale massive projects down into smaller projects that fit more realistically within the turnover periods of senior leaders; fitting project timelines within one or two leadership cycles instead of several helps avoid the negative impacts of churning through multiple leaders. Bonus: Boots didn’t mention this as a solution specifically, but we are also curious about how the public sector could learn from the private sector to actually retain more of its leaders. Again, read: CDO turnover.

Some things may seem impossible to complete in such a short period, but figuring out how to break a behemoth down into small, achievable, individually useful parts can help save a project from ruination.

For further reading: there are some strategies in this fun article called “Don’t Build It” if you want to keep thinking about this stuff. 

Ajah · Finance · Prototyping
Failanthropic Story Hour: Ajah at TAG 2022

We went to TAG in San Antonio and hosted a fun event.

Photo by “No Revisions” on Unsplash

We're pretty big proponents of failure. It happens all the time; most things don't work out as planned. We think this is especially important to acknowledge, discuss, and learn from in the social sector. Our failures should be identified quickly (and mercilessly) so that we can address them and incorporate their lessons into our work - especially the tech and data aspects of our work, which are even more fraught and complicated, making failure even more important to address in those contexts.

The philanthropic sector, traditionally, tends to avoid discussing failure - and even when we do, we tend to stick to abstract versions of it. So this year at TAG, we wanted to make a space for people to be able to safely talk about the failures they've experienced or witnessed while working in the philanthropic sector.

In other words, we wanted to spend some time complaining with some of our peers. So we did! While at TAG we brought together a group of folks who work in the philanthropy sector for an informal conversation (a Failanthropic happy hour) to vent, compare stories and commiserate together.

Despite the conference hotel unintentionally sabotaging the event by changing rooms, we pulled it off! We may or may not have had to steal some signs from a rival conference - but that is beside the point. It was great to see a group of really smart people going around in a circle trying to identify the most egregious failure they'd witnessed and what exactly made it so awful. Having a group of people who could relate allowed us to collaborate and identify ways the system could be fixed.

One of our favourite topics that emerged during the hour was how to offer advice to those who are new to the work. There is a lot of wisdom to be had in a room full of people who have seen the good, bad and ugly of the sector. We hope to do this again in the future at TAG and would love to see you all at the next Failanthropic story hour.

Ajah · Finance · Prototyping
What we're reading this week...

A round-up of articles that have caught our eye in the last few weeks

From UX Indonesia on Unsplash

Here are three things we read this week and what we’re thinking about them:

  1. This article on unlocking the potential of open 990 data is a fantastic success story if you’re into open data. If you’re paying attention to these developments, you might be aware that these datasets might never have been made available if not for the litigation of ur-Open Data activist Carl Malamud. The real work of creating value with this data has just begun. You can read more about this in an article we wrote about the data ecosystem for nonprofits in Canada.

  2. This press release announces new guidance from the White House Office of Science and Technology Policy. The move strives to address the inequity created by paywalls on publications that contain research paid for by taxpayers. The guidance plans to eliminate the 12-month publication embargo for federally funded, peer-reviewed research articles and to make the data published in those articles immediately available upon publication.

  3. This article on why funders should go meta addresses why philanthropies should focus on funding meta-issues like research and evaluation, along with efforts to improve research quality. The article argues that research, and the efforts to improve research, are undersupplied, and many opportunities go unnoticed. Spending money on research and development, or improving the process of both, is one way that philanthropy can make a difference.

Jesse Bourns · Prototyping
Where did everybody go?

Is volunteerism going up or down? Two reports tell two stories about volunteerism trends.

Photo by Adolfo Felix on Unsplash

Our partner the Ontario Nonprofit Network (ONN), together with l’Assemblée de la Francophonie de l’Ontario (l’AFO), recently shared their latest ‘State of the Sector’ survey on Ontario’s nonprofit landscape. The survey received 1,500 responses - nearly half as many as in 2021 - and identified two main areas the sector needs to address:

(i) Supply & demand: demand for the services provided by the nonprofit sector is higher than ever, while inflation and operating costs are rising and the ability to recoup revenue is flat or declining.

(ii) Human resources: volunteers are not returning to organizations, which poses a problem considering that almost half of the organizations surveyed by the ONN are volunteer-run. This is exacerbated by the general labour shortage as well as the sector’s own recruitment challenges: with a volunteer-based workforce, organizations can’t raise salaries or offer other conventional incentives.

We are especially curious about this latter point - why volunteers are not returning to the sector, and whether this is related to the dip in survey respondents. If these two things are related, what else might they indicate about the ‘state of the sector’ (hint: burnout and fatigue)?

This made us think of a report compiled by another of our partners, GivingTuesday, which provides insight into how the pandemic encouraged shifts towards giving back in less formalized ways (think mutual aid groups or picking up groceries for your elderly neighbours). These alternative forms of altruism haven’t been captured by the survey data. This ties into a bigger problem: the narrative of declining volunteerism is being measured and published by the sector itself, which could be seen as having a bias. If people aren’t participating less in general, but only less within formal organizations, that tells a different story than people becoming less altruistic overall.

Ajah · Finance · Prototyping
Too good to be true

A cautionary tale from academia

Photo by Jackson Simmer on Unsplash

We have a story from the sciences that we think is relevant to our work in the nonprofit sector. This summer, Science magazine published news of manipulated images in a 2006 study of Alzheimer’s disease. When the study was first published, it caused a significant stir in the medical community and created hope for the future of Alzheimer’s research. The news of these doctored images raises concern that millions of dollars and countless hours of research may have been wasted over the past 15 years.

The issue here is not just that this happened; it is that it went unnoticed for so long. Even more shocking is that this occurred in a field where peer review is the standard. We usually think of science as the study of hard, immutable facts, but in reality research is messier than one would think, and these kinds of issues are not uncommon. Concerns about Lesné’s work were raised as early as 2013 on websites like PubPeer, but did not gain substantial attention. Questions about the validity of this landmark study flew under the radar for nearly two decades. This not only raises concerns about the standards and effectiveness of peer review, but also illustrates the real consequences of failures in that process.

So why is this relevant to the nonprofit sector? Despite being the 5th most cited article on Alzheimer's, with 2,277 citations, the study contained glaring problems that peer review did not catch. In the nonprofit sector, we don't have the systems for assessing validity that exist in the sciences. There is no coordination within our sector to ensure that published data is thoroughly reviewed and flagged for inaccurate (or deceitful) findings. Without these kinds of systems in place, it is hard to know whether erroneous conclusions and findings are slipping through the cracks. The failure in Alzheimer's research shows that even with these systems in place, not everything is caught - so we should be worried about what kinds of issues we *aren't* seeing without them in our sector. And, more importantly, we should be worried about the negative but invisible impacts those issues are having on the people we are meant to be helping with our services.

Ajah · Finance · Prototyping
What we're reading this week...

A round-up of articles that have caught our eye in the last few weeks

From UX Indonesia on Unsplash

By Michael Lenczner and Jesse Bourns

  1. This article on the tenure of Chief Data Officers (CDOs) looks at why a CDO’s lifespan at a company tends to be so much shorter than that of their C-level peers. The author argues that a “failure to articulate from the very start how a CDO’s work differs from that of other executives” may be part of the cause. The non-profit sector tends to believe that the for-profit sector “knows how to do” data, so if we can adopt its approach, everything will work out. The article makes it clear that this is not true - CDOs are the people who supposedly have the recipe for driving things forward, and even for them it is still difficult to make things happen.
    This reminds us of this Sean Boots article. If your leader is going to be gone by the time the project is completed, then they are not going to care as much about the project (which probably means the project won’t go well).

  2. This article by Louise Adongo (warning: paywall) covers something we don’t often see discussed in the non-profit space. Rather than assuming the positive impact of projects, the article talks about the potential harm those projects may cause. The rhetoric in the nonprofit sector about social R&D usually assumes either positive impact or, at worst, neutrality. The reality is that experimentation creates risk - which can mean harm, too.

  3. We lied. Contrary to the title of this post, we haven’t read this book yet (aren’t we cheeky). But we have ordered it and are super excited to delve into it. The authors of Digital Transformation at Scale have a lot of experience doing this work in complex social environments. Generally, digital transformation is done at a smaller scale (the individual organization) rather than across multiple institutions and agencies (e.g., governments). We can’t wait to see their take on doing this work at such a large scale.

Jesse Bourns · Prototyping
Addressing Decentralized Data in Government

Using old tools (DOI) for new tricks (simplifying data reporting).

Photo by FRANCK on Unsplash

By Tami Piovesan, at Ajah

We’ve been following the US National Secure Data Service Act introduced in Congress, as well as the momentum around the use of new persistent identifiers by research funders. We think both have interesting implications for the world of evidence use.

What is the National Secure Data Service Act?

The National Secure Data Service Act (NSDS Act) is a piece of legislation that strives to improve how the federal government manages its data infrastructure. Currently the US federal government’s data infrastructure is largely decentralized - individual agencies and programs collect data independently and are not required to share it (or to re-use data already collected by other departments). As a result, the American public and businesses repeatedly report similar information across government. The NSDS would establish a system that requires government-wide data linkage and access to infrastructure for statistical activities.

So really, what is the National Secure Data Service Act?

The NSDS Act would address a major gap in the US government's data infrastructure by establishing a system for sharing, combining, and using data while maintaining privacy safeguards. If and when this legislation is approved, the NSDS would set up a secure infrastructure where government and non-government researchers could 1) submit proposed projects for approval; 2) link and access data for research and analysis; and 3) have project results privacy-protected before they are prepared for publication.

Why are we paying attention?

We’ve been following the entire Commission on Evidence-Based Policymaking and its related developments. Beyond any improvements to the existing approach to evidence development, we think that the Evidence Act could present new opportunities for the nonprofit sector to be a partner in evidence creation, not just evidence use.

Is grant identifying a thing? 

Over the past year, academic funders have begun using new Crossref capabilities to assign unique IDs to individual grants. By registering grants, academic funders can more easily and accurately track the outputs connected to the research they support.

When registering a grant, funders give Crossref metadata about the grant, and the grant record receives a unique DOI that identifies it - similar to a call number in a library’s Dewey Decimal system. By attaching DOIs to grants, Crossref makes it easier to find the publications and research that resulted from grant funding, allowing for a better overall understanding of research impact. If you would like to learn more about how DOIs could change how we look at grants, check out the Whys and Hows of DOIs for Grants.
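If you want to see what this linkage looks like in practice, here is a minimal sketch in Python (using the requests library) that pulls the funder and award metadata attached to a publication’s record in the public Crossref REST API. The article DOI and the contact email below are placeholders, and how much funding detail comes back depends entirely on what the publisher and funder deposited - treat this as an illustration of the pattern, not a definitive recipe.

    import requests

    # Public Crossref REST API; no API key is required. Including a mailto address in the
    # User-Agent opts into Crossref's "polite" pool. The contact address is a placeholder.
    API = "https://api.crossref.org"
    HEADERS = {"User-Agent": "grant-linkage-demo/0.1 (mailto:data@example.org)"}


    def crossref_record(doi: str) -> dict:
        """Fetch the metadata record that Crossref holds for a registered DOI."""
        resp = requests.get(f"{API}/works/{doi}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return resp.json()["message"]


    def funding_for(doi: str) -> list:
        """Return the funder entries (funder name, Funder Registry DOI, award IDs)
        deposited with a publication's Crossref record, if any."""
        return crossref_record(doi).get("funder", [])


    if __name__ == "__main__":
        # Placeholder DOI -- substitute any article DOI registered with Crossref.
        article_doi = "10.99999/example-article"
        for entry in funding_for(article_doi):
            print(entry.get("name"), entry.get("DOI"), entry.get("award", []))

Run against a real DOI, this prints whichever funder names, registry identifiers, and award numbers were deposited alongside the article; the gaps you will likely notice in that metadata are exactly what grant DOIs are meant to help close.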

Why this matters 

We’ve been thinking for a long time about applying similar identifier approaches to the nonprofit sector’s activities. By leveraging the tools and data infrastructure used for scholarly publishing and communication, the nonprofit sector could make its data and research more accessible. To learn more about information infrastructure and the opportunities it presents, check out our paper Knowledge Sharing Infrastructure for the Nonprofit Sector.

 

Ajah · Finance · Prototyping