
You are browsing the archive for Open Research.

Open Access to Research Data: The European Commission’s consultation in progress

- July 9, 2013 in Featured, Open Access, Open Research

The European Commission held a public consultation on open access to research data on July 2 in Brussels, inviting statements from researchers, industry, funders, IT and data centre professionals, publishers and libraries. The input of these stakeholders will inform the revision of the Commission’s policy and is particularly important for the ongoing negotiations on the EU’s next major research programme, Horizon 2020, under which roughly 25-30 billion euros will be available for academic research. Five questions formed the basis of the discussion:

  • How can we define research data, and what types of research data should be open?
  • When and how does openness need to be limited?
  • How should the issue of data re-use be addressed?
  • Where should research data be stored and made accessible?
  • How can we enhance “data awareness” and a “culture of sharing”?

Contributions from the researchers’ perspective emphasised that data, metadata and other documentation should be made available so that the results of a research article can be replicated; more available data also means more scrutiny and more value extracted from the data. Furthermore, there is a need for pre-registration of studies in order to see the full picture of a research field, since negative results in the biomedical sciences (as well as in many other fields) often go unpublished. There is also a need for binding mechanisms, e.g. required data management plans and better linkage between research data and the scientific publication, with data availability enforced by journals, as well as sustainable plans for making data available in which open access to data is formally part of the research budget.

Searching for and finding research data should also be made easier, as open access to data does not necessarily mean accessible data. There was also an emphasis on making every contributor known and acknowledged, on establishing cultures of data sharing in the different disciplines, and on “augmenting the scientific infrastructure to be technical, social and participatory” (Salvatore Mele, CERN).

There was some agreement that commercial data and data which can lead back to individuals should be kept closed, while some aggregated data should be shared. Industry representatives (Philips Research, Federation of German Security and Defence Industries) argued for keeping some data closed, deciding on a case-by-case basis, and applying embargo periods to data produced in public-private partnerships in order to encourage investment.

Funders viewed research data as a public good which should be managed and discoverable, and encouraged open and better access to research data so that research outputs are accessed and used in a way that maximises the public benefit. While there is a growing consensus around funder policies, these should be better implemented and enforced. Resources such as infrastructure, incentives and cultures, capacity and skills, and ethics and governance should be built and sustained, in recognition of the different stages that different disciplines are currently at (some very good points made by David Carr of the Wellcome Trust).

The IT and data centre professionals and the librarians spoke about the need to recognise the role of data scientists and data librarians, with appropriate funding and careers. While the value of data is often recognised only later and grows over time, there is less understanding of who should pay for long-term preservation, since few institutions can make indefinite commitments. Another key component is proper training and the development of core skills in dealing with research data (where librarians can assist researchers with data management plans, bridging the gap in knowledge), as well as proper citation rules and practices for data, so that career recognition can be linked to the sharing of research data in order to boost incentives.

While the European Commission has been carrying the flag of open access, mandating open access to research publications funded by the last research and innovation programme FP7, there are larger hurdles on the road to open access to research data. While the EC’s communication “Towards better access to scientific information” reflects some commitment to open access to research data, there are many exceptions, e.g. privacy, trade secrets, national security, legitimate commercial interest, intellectual property, data resulting from a public-private partnership, etc. As Mireille van Echoud, professor of Information Law at IViR, stated at the Open Economics workshop in June, “any lawyer will find whatever argument they need to keep data from falling under an open access obligation”.

Take a look at the more detailed notes from Ian Mulvany and his presentation on behalf of several publishers.

First Opinion series on Transparency in Social Science Research

- June 7, 2013 in Berkeley Initiative for Transparency in the Social Sciences (BITSS), External Projects, Featured, Open Data, Open Economics, Open Research

The Berkeley Initiative for Transparency in the Social Sciences (BITSS) is a new effort to promote transparency in empirical social science research. The program is fostering an active network of social science researchers and institutions committed to strengthening scientific integrity in economics, political science, behavioral science, and related disciplines.

Central to the BITSS effort is the identification of useful strategies and tools for maintaining research transparency, including the use of study registries, pre-analysis plans, data sharing, and replication. With its institutional hub at UC Berkeley, the network facilitates discussion and critique of existing strategies, testing of new methods, and broad dissemination of findings through interdisciplinary convenings, special conference sessions, and online public engagement.

The first opinion series on transparency in social science research (see: http://cegablog.org/transparency-series/) was published on the CEGA Development Blog in March 2013. The series built on a seminal research meeting held at the University of California, Berkeley on December 7, 2012, which brought together a select interdisciplinary group of scholars – from biostatistics, economics, political science and psychology – with a shared interest in promoting transparency in empirical social science research.

Making Data Count and the Value of Research Data

- May 29, 2013 in Featured, Open Research

Last month in Berlin, the Knowledge Exchange gathered around 80 representatives from funding agencies, research institutions, universities and scholarly societies at the Making Data Count workshop, with the aim “to discuss and build on possibilities to implement the culture of sharing and to integrate publication of data into research assessment procedures.”


The report “The Value of Research Data: Metrics for datasets from a cultural and technical point of view”, which was presented during the workshop, argued that while data sharing between scientists is not yet common practice, the development of data metrics could serve as one of the incentives for researchers: metrics can be incorporated into professional and career reward structures, make data more visible, and establish better practices of data citation and data re-use.

Some of the conclusions of the report also emphasise that data sharing has many important functions. One of them is serving as “a potential source for scientific recognition”, where the creation and curation of datasets may be seen as an important contribution to be considered in promotions and the allocation of research funding. Another function of making data openly available to the research community is providing the possibility to verify and reproduce research findings as part of good scientific practice, “protecting against fraud and faulty data”.

Additionally, data sharing allows a more efficient use of research resources: repeated collection of the same data is avoided, and new opportunities emerge for re-using the data and for new scientific collaborations. Data sharing is also mentioned as a tool enabling new research agendas, international research collaborations and interdisciplinary research. Finally, the availability of research data provides training material and supports the work of educators.

The report also discusses current data metrics models and the opportunities and limitations of data publications, which the authors identify as the most developed model of all. The recommendations include bringing down the costs of data publications and making the process more efficient, incorporating data metrics into scholarly reward structures, reducing the dispersion of data repositories, developing standards and interoperability protocols across the different actors, etc.

The report was written by Rodrigo Costas, Ingeborg Meijer, Zohreh Zahedi and Paul Wouters of Leiden University. Read the report

Open Access Economics: To share or not to share?

- May 22, 2013 in Featured, Open Access, Open Data, Open Economics, Open Research

Last Friday, Barry Eichengreen, professor of Economics and Political Science at Berkeley, wrote about “Open Access Economics” at the prestigious commentary, analysis and opinion page Project Syndicate, where influential professionals, politicians, economists, business leaders and Nobel laureates share opinions about current economic and political issues.

He reaffirmed that the results of the Reinhart and Rogoff study were indeed used by some politicians to justify the austerity measures taken by governments around the world struggling with high public debt.

Professor Eichengreen also criticised the National Bureau of Economic Research (NBER) for failing to require data and code for the “flawed study” of the Harvard economists, which appeared first in the distinguished working paper series of NBER.

In line with the discussion we started at the LSE Social Impact Blog and the New Scientist, Barry Eichengreen brought home the message that the enforcement of a data availability policy would indeed have made a difference in this case.

At the same time, some authors express doubts about the need to share data and look for excuses to avoid sharing the data behind their publications. Economists at the anonymous web forum Econjobrumors.com have been joking about the best ways to avoid sharing data.

Here are some of the “creative” suggestions on how the anonymous author could get around sending their data:

“Refer him to your press secretary”
“Tell him you had a computer virus that wiped out the dataset”
“Not obliged to let anyone free ride. Can you explain it like that?”
“Tell him its proprietary data and you can’t share it without having to kill him.”
“Tell him, ‘I’ll show you mine if you show me yours.’”
“…say you signed NDA.”
“Huddle in the corner of your office wrapped in a blanket and some hot coco from the machine down the hall and wait for the inevitable.”
“Don’t reply.”

Anonymous author: “No, did not make up the results. But let’s just say you really do not want to play with the data in any way. No good for significance.”
Anonymous comment: “Added a couple of extra stars for good luck?”.

While many of the discussions on the anonymous forum employ humour and jokes, this discussion reflects a mainstream attitude towards data sharing. It also shows how uncertain some authors are about the robustness of their results: even if they did not make any Reinhart-and-Rogoff-style Excel mistakes, they hesitate to share lest closer scrutiny expose a weaker methodology. Maybe more disclosure, where data can be shared, could improve the way research is done.

Securing the Knowledge Foundations of Innovation

- May 15, 2013 in Advisory Panel, Featured, Open Access, Open Data, Open Research

Last month, Paul David, professor of Economics at Stanford University, Senior Fellow of the Stanford Institute for Economic Policy Research (SIEPR) and a member of the Advisory Panel delivered a keynote presentation at the International Seminar of the PROPICE in Paris.

Professor David expresses concern that the increased use of intellectual property rights (IPR) protections “has posed problems for open collaborative scientific research” and that the IPR regime has been used by businesses e.g. to “raise commercial rivals’ costs”, where empirical evidence has shown that business innovation “is being inhibited by patent thickets”.

In describing the anti-commons issue, Professor David also pointed out that research databases are likely sites for problems and emphasised the importance of protecting future open access to critical data.

High-quality data is also costly to produce, and “…strengthening researchers’ incentives to create transparent, fully documented and dynamically annotated datasets to be used by others remains an insufficiently addressed problem”.

Read the whole presentation below:


Automated Game Play Datasets: New Releases

- April 24, 2013 in Announcements, Data Release, Featured, Open Data, Open Economics, Open Research

Last month we released ten datasets from the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [See dedicated webpage].

We are now happy to announce that the list has grown by seven more datasets, added this month and now hosted at datahub.io:


Clark, K. & Sefton, M., 2001. Repetition and signalling: experimental evidence from games with efficient equilibria. Economics Letters, 70(3), pp.357–362.

Link to publication | Link to data
Costa-Gomes, M. and Crawford, V. 2006. “Cognition and Behavior in Two-Person Guessing Games: An Experimental Study.” The American Economic Review, 96(5), pp.1737-1768

Link to publication | Link to data
Costa-Gomes, M., Crawford, V. and Bruno Broseta. 2001. “Cognition and Behavior in Normal-Form Games: An Experimental Study.” Econometrica, 69(5), pp.1193-1235

Link to publication | Link to data
Crawford, V., Gneezy, U. and Yuval Rottenstreich. 2008. “The Power of Focal Points is Limited: Even Minute Payoff Asymmetry May Yield Large Coordination Failures.” The American Economic Review, 98(4), pp.1443-1458

Link to publication | Link to data
Feltovich, N., Iwasaki, A. and Oda, S., 2012. Payoff levels, loss avoidance, and equilibrium selection in games with multiple equilibria: an experimental study. Economic Inquiry, 50(4), pp.932-952.

Link to publication | Link to data
Feltovich, N., & Oda, S., 2013. The effect of matching mechanism on learning in games played under limited information, Working paper

Link to publication | Link to data
Schmidt D., Shupp R., Walker J.M., and Ostrom E. 2003. “Playing Safe in Coordination Games: The Roles of Risk Dominance, Payoff Dominance, and History of Play.” Games and Economic Behavior, 42(2), pp.281–299.

Link to publication | Link to data

Any questions or comments? Please get in touch: economics [at] okfn.org

Reinhart-Rogoff Revisited: Why we need open data in economics

- April 18, 2013 in Featured, Open Data, Open Economics, Open Research, Public Finance and Government Data

Another economics scandal made the news this week. Harvard Kennedy School professor Carmen Reinhart and Harvard University professor Kenneth Rogoff argued in their 2010 NBER paper that economic growth slows down when the debt/GDP ratio exceeds the threshold of 90 percent of GDP. These results were also published in one of the most prestigious economics journals – the American Economic Review (AER) – and had a powerful resonance in a period of serious economic and public policy turmoil when governments around the world slashed spending in order to decrease the public deficit and stimulate economic growth.


Yet, they were proven wrong. Thomas Herndon, Michael Ash and Robert Pollin from the University of Massachusetts (UMass) tried to replicate the results of Reinhart and Rogoff and criticised them on three grounds:

  • Coding errors: due to a spreadsheet error, five countries were excluded completely from the sample, resulting in significant errors in the average real GDP growth and the debt/GDP ratio in several categories
  • Selective exclusion of available data and data gaps: Reinhart and Rogoff exclude Australia (1946-1950), New Zealand (1946-1949) and Canada (1946-1950). This exclusion alone is responsible for a significant reduction of the estimated real GDP growth in the highest public debt/GDP category
  • Unconventional weighting of summary statistics: the authors do not discuss their decision to weight equally by country rather than by country-year, which could be arbitrary and ignores the issue of serial correlation (the sketch after this list shows why the weighting choice matters)
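To see why the weighting choice matters, here is a minimal sketch with invented numbers (not the actual Reinhart-Rogoff dataset): when one country contributes many years to the high-debt category and another contributes only a single year, averaging by country and averaging by country-year can paint very different pictures.

```python
# Illustrative only: invented numbers, not the Reinhart-Rogoff data.
# Each row is one country-year observation in the ">90% debt/GDP" category.
import pandas as pd

obs = pd.DataFrame({
    "country": ["A"] * 19 + ["B"],    # country A contributes 19 years, country B a single year
    "growth":  [2.5] * 19 + [-7.6],   # country B's single year happens to be very bad
})

by_country_year = obs["growth"].mean()                        # every year weighted equally
by_country = obs.groupby("country")["growth"].mean().mean()   # every country weighted equally

print(f"weighted by country-year: {by_country_year:+.2f}%")   # roughly +2.0%
print(f"weighted by country:      {by_country:+.2f}%")        # -2.55%
```

Neither choice is automatically wrong; the point of the critique is that such a consequential choice went undiscussed.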

The implications of these results are that countries with high levels of public debt experience only “modestly diminished” average GDP growth rates and as the UMass authors show there is a wide range of GDP growth performances at every level of public debt among the twenty advanced economies in the survey of Reinhart and Rogoff. Even if the negative trend is still visible in the results of the UMass researchers, the data fits the trend very poorly: “low debt and poor growth, and high debt and strong growth, are both reasonably common outcomes.”

Source: Herndon, T., Ash, M. & Pollin, R., “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff”, Political Economy Research Institute, University of Massachusetts Amherst, Working Paper Series, April 2013.

What makes it even more compelling news is that it is all a tale from the state of Massachusetts: distinguished Harvard professors (the #1 university in the US) challenged by empiricists from the less well-known UMass (the #97 university in the US). Moreover, the AER, despite its excellent data availability policy, which acts as a role model for other journals in economics, failed to enforce it and to make the data and code of Reinhart and Rogoff available to other researchers.

Coding errors happen, yet the greater research misconduct was not allowing other researchers to review and replicate the results by making the data openly available. If the data and code had been available upon publication in 2010, it might not have taken three years to prove these results wrong, years during which the findings probably influenced the direction of public policy around the world towards stricter austerity measures. Sharing research data makes replication and discussion possible, enabling the scrutiny of research findings as well as the improvement and validation of research methods through more scientific enquiry and debate.

Get in Touch

The Open Economics Working Group advocates the release of datasets and code along with published academic articles and provides practical assistance to researchers who would like to do so. Get in touch if you would like to learn more by writing to us at economics [at] okfn.org and signing up for our mailing list.

References

Herndon, T., Ash, M. & Pollin, R., “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff”, Political Economy Research Institute, University of Massachusetts Amherst, Working Paper Series, April 2013: Link to paper | Link to data and code

Open Research Data Handbook – Call for Case Studies

- April 9, 2013 in Announcements, Call for participation, Featured, Open Data, Open Economics, Open Research

The OKF Open Research Data Handbook – a collaborative and volunteer-led guide to Open Research Data practices – is beginning to take shape and we need you! We’re looking for case studies showing benefits from open research data: either researchers who have personal stories to share or people with relevant expertise willing to write short sections.

We’re looking to develop a resource that will provide an introduction to open research data: what open research data actually is, the benefits of opening up research data, and the processes and tools which researchers need in order to do so, with examples from different academic disciplines.

Leading on from a couple of sprints, a few of us are in the process of collating the first few chapters, and we’ll be asking for comment on these soon.

In the meantime, please provide us with case studies to include, or let us know if you are willing to contribute areas of expertise to this handbook.


We now need your help to gather concrete case studies which detail your experiences of working with Open Research Data. Specifically, we are looking for:

  • Stories of the benefits you have seen as a result of open research data practices
  • Challenges you have faced in open research, and how you overcame them
  • Case studies of tools you have used to share research data or to make it openly available
  • Examples of how failing to follow open research practices has hindered the progress of science, economics, social science, etc.
  • … More ideas from you!

Case studies should be around 200-500 words long. They should be concrete, based on real experiences, and should focus on one specific angle of open research data (you can submit more than one study!).

Please fill out the following form in order to submit a case study:

Link to form

If you have any questions, please contact us on researchhandbook [at] okfn.org

Releasing the Automated Game Play Datasets

- March 7, 2013 in Announcements, Data Release, Featured, Open Data, Open Economics, Open Research, Open Tools

We are very happy to announce that the Open Economics Working Group is releasing the datasets of the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [See dedicated webpage].

The authors who participated in the study have given their permission to publish their data online. We hope that by making this data available online we will aid researchers working in this field. This initiative is motivated by our belief that for economic research to be reliable and trusted, it should be possible to reproduce research findings, which is difficult or even impossible without the availability of the data and code. Making this material openly available reduces the barriers to reproducible research to a minimum.

If you are interested in knowing more or would like help releasing research data in your field, please contact us at: economics [at] okfn.org

List of Datasets and Code

Andreoni, J. & Miller, J.H., 1993. Rational cooperation in the finitely repeated prisoner’s dilemma: Experimental evidence. The Economic Journal, pp.570–585.

Link to publication | Link to data
Bó, P.D., 2005. Cooperation under the shadow of the future: experimental evidence from infinitely repeated games. The American Economic Review, 95(5), pp.1591–1604.

Link to publication | Link to data
Charness, G., Frechette, G.R. & Qin, C.-Z., 2007. Endogenous transfers in the Prisoner’s Dilemma game: An experimental test of cooperation and coordination. Games and Economic Behavior, 60(2), pp.287–306.

Link to publication | Link to data
Clark, K., Kay, S. & Sefton, M., 2001. When are Nash equilibria self-enforcing? An experimental analysis. International Journal of Game Theory, 29(4), pp.495–515.

Link to publication | Link to data
Duffy, J. & Feltovich, N., 2002. Do Actions Speak Louder Than Words? An Experimental Comparison of Observation and Cheap Talk. Games and Economic Behavior, 39(1), pp.1–27.

Link to publication | Link to data
Duffy, J. & Ochs, J., 2009. Cooperative behavior and the frequency of social interaction. Games and Economic Behavior, 66(2), pp.785–812.

Link to publication | Link to data
Knez, M. & Camerer, C., 2000. Increasing cooperation in prisoner’s dilemmas by establishing a precedent of efficiency in coordination games. Organizational Behavior and Human Decision Processes, 82(2), pp.194–216.

Link to publication | Link to data
Ochs, J., 1995. Games with unique, mixed strategy equilibria: An experimental study. Games and Economic Behavior, 10(1), pp.202–217.

Link to publication | Link to data
Ong, D. & Chen, Z., 2012. Tiger Women: An All-Pay Auction Experiment on Gender Signaling of Desire to Win. Available at SSRN 1976782.

Link to publication | Link to data
Vlaev, I. & Chater, N., 2006. Game relativity: How context influences strategic decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(1), p.131.

Link to publication | Link to data

Project Background

An important need for developing better economic policy prescriptions is an improved method of validating theories. Originally economics depended on field data from surveys and laboratory experiments. An alternative method of validating theories is through the use of artificial or virtual economies. If a virtual world is an adequate description of a real economy, then a good economic theory ought to be able to predict outcomes in that setting. An artificial environment offers enormous advantages over the field and laboratory: complete control – for example, over risk aversion and social preferences – and great speed in creating economies and validating theories. In economics the use of virtual economies can potentially enable us to deal with heterogeneity, with small frictions, and with expectations that are backward looking rather than determined in equilibrium. These are difficult or impractical to combine in existing calibrations or Monte Carlo simulations.

The goal of this project is to build artificial agents by developing computer programs that act like human beings in the laboratory. We focus on the simplest type of problem of interest to economists: simple one-shot two-player simultaneous-move games. There is a wide variety of existing published data on laboratory behavior that will be our primary testing ground for our computer programs. As we achieve greater success with this, we want to see whether our programs can adapt themselves to changes in the rules: for example, if payments are changed in a certain way, the computer programs will play differently; do people do the same? In some cases we may be able to answer these questions with data from existing studies; in others we will need to conduct our own experimental studies.

There is a great deal of existing research relevant to the current project. The state of the art in the study of virtual economies is agent-based modeling (Bonabeau (2002)). In addition, both the theoretical literature on learning in games and the empirical literature on behavior in the experimental laboratory are crucially related. From the perspective of theory, the most relevant economic research is Foster and Vohra’s (1999) work on calibrated play and the related work on smooth fictitious play (Fudenberg and Levine (1998)) and regret algorithms (Hart and Mas-Colell (2000)). There is also relevant work on regret optimization in the computational game theory literature, such as Nisan et al. (2007). Empirical work on human play in the laboratory has two basic threads: first, the research on first-time play, such as Nagel (1995) and the hierarchical models of Stahl and Wilson (1994), Costa-Gomes, Crawford, and Broseta (2001) and Camerer, Ho, and Chong (2004). Second are the learning models, most notably the reinforcement learning model of Erev and Roth (1998) and the EWA model (Ho, Camerer, and Chong (2007)). This latter model can be considered state of the art, including as it does both reinforcement and fictitious-play-type learning as well as initial play from a cognitive hierarchy.
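To give a flavour of the learning rules cited above, here is a minimal, hypothetical sketch of classical fictitious play in a 2x2 coordination game. It illustrates the general technique from the learning-in-games literature and is not code from the project itself; the payoff matrix and number of rounds are arbitrary choices for the example.

```python
# A minimal sketch of classical fictitious play in a 2x2 coordination game.
# Hypothetical illustration of the learning-in-games literature cited above,
# not code from the "Small Artificial Human Agents" project.
import numpy as np

# Payoff matrices: A[i, j] is the row player's payoff and B[i, j] the column
# player's payoff when row plays i and column plays j. Both players prefer
# to coordinate, and coordinating on action 0 pays more.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
B = A.copy()   # symmetric coordination game

counts_row = np.ones(2)   # row player's counts of the column player's past actions
counts_col = np.ones(2)   # column player's counts of the row player's past actions

for _ in range(500):
    belief_row = counts_row / counts_row.sum()   # row's belief about column's play
    belief_col = counts_col / counts_col.sum()   # column's belief about row's play
    a_row = int(np.argmax(A @ belief_row))       # row best-responds to its belief
    a_col = int(np.argmax(B.T @ belief_col))     # column best-responds to its belief
    counts_row[a_col] += 1                       # row observes column's action
    counts_col[a_row] += 1                       # column observes row's action

print("empirical frequency of column player's actions:", counts_row / counts_row.sum())
print("empirical frequency of row player's actions:   ", counts_col / counts_col.sum())
```

Each player tracks the empirical frequency of the opponent's past actions and best-responds to it; smooth fictitious play and the regret-based algorithms mentioned above replace the hard best response with a perturbed or regret-weighted choice.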

Preregistration in the Social Sciences: A Controversy and Available Resources

- February 20, 2013 in Featured, Open Data, Open Economics, Open Research, Open Tools

For years now, the practice of preregistering clinical trials has reduced publication bias dramatically (Drummond Rennie offers more details). Building on this trend towards transparency, the Open Knowledge Foundation, which runs the Open Economics Working Group, has expressed support for All Trials Registered, All Results Reported (http://www.alltrials.net). This initiative argues that all clinical trial results should be reported, because the spread of this free information will reduce bad treatment decisions in the future and allow others to find missed opportunities for good treatments. The idea of preregistration has therefore proved valuable for the medical profession.

In a similar push for openness, a debate is now emerging about the merits of preregistration in the social sciences. Specifically, could social scientific disciplines benefit from investigators’ committing themselves to a research design before the observation of their outcome variable? The winter 2013 issue of Political Analysis takes up this issue with a symposium on research registration, wherein two articles make a case in favor of preregistration and three responses offer alternate views on this controversy.

There has been a trend toward transparency in social research: many journals now require authors to release public replication data as a condition for publication. Additionally, public funding agencies such as the U.S. National Science Foundation require public release of data as a condition for funding. This push for additional transparency allows other researchers to conduct secondary analyses that may build on past results and also allows empirical findings to be subjected to scrutiny as new theory, data, and methods emerge. Preregistering a research design is a natural next step in this transparency process, as it would allow readers, including other scholars, to gain a sense of how the project was developed and how the researcher made tough design choices.

Another advantage of preregistering a research design is that it can curb publication bias. Gerber & Malhotra observe that papers produced in print tend to have a higher rate of positive results in hypothesis tests than should be expected. Registration has the potential to curb publication bias, or at least its negative consequences. Even if committing oneself to a research design does not change the prospect of publishing an article in the traditional format, it would signal to the larger audience that a study was developed, even if a publication never emerged. This would allow the scholarly community at large to investigate further, perhaps reanalyze data that were not published in print, and if nothing else get a sense of how preponderant null findings are for commonly-tested hypotheses. Also, if more researchers tie their hands in a registration phase, then there is less room for activities that might push a result over a common significance threshold.

To illustrate how preregistration can be useful, my article in this issue of Political Analysis analyzes the effect of Republican candidates’ positions on the immigration issue on their share of the two-party vote in the 2010 elections for the U.S. House of Representatives. In this analysis, I hypothesized that Republican candidates may have been able to garner additional electoral support by taking a harsh stand on the issue. I designed my model to estimate the effect on vote share of taking a harsher stand on immigration, holding the propensity to take a harsh stand constant. This propensity was based on other factors known to shape election outcomes, such as district ideology, incumbency, campaign finances, and previous vote share. I crafted my design before votes were counted in the 2010 election and publicly posted it to the Society for Political Methodology’s website as a way of committing myself to this design.

[Figure: immigComparison, estimated treatment effects across propensity scores for harsh rhetoric]

In the figure, the horizontal axis represents the values that the propensity score for harsh rhetoric can take. The tick marks along the base of the graph indicate the actual values of the propensity for harsh rhetoric in the data. The vertical axis represents the expected change in the proportion of the two-party vote from moving from a welcoming position to a hostile position. The figure shows a solid black line, which indicates my estimate of the effect of a Republican’s taking a harsh stand on immigration on his or her proportion of the two-party vote. The two dashed black lines indicate the uncertainty in this estimate of the treatment effect. As can be seen, the estimated effects come with considerable uncertainty, and I can never reject the prospect of a zero effect.

However, a determined researcher could have tried alternate specifications until a discernible result emerged. The figure also shows a red line representing the estimated treatment effect from a simpler model that omits district ideology (how liberal or conservative the district is). The dotted red lines represent the uncertainty in this estimate. As can be seen, this model reports a uniform treatment effect of 0.079 that is discernible from zero. After “fishing” with the model specification, a researcher could have manufactured a result suggesting that Republican candidates could boost their share of the vote by 7.9 percentage points by moving from a welcoming to a hostile stand on immigration! Such a result would be misleading because it overlooks district ideology. Whenever investigators commit themselves to a research design, this reduces the prospect of fishing after observing the outcome variable.
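The sketch below is a purely hypothetical illustration of this point, using simulated data rather than the actual 2010 House data: when a candidate’s stance is correlated with district ideology and ideology drives vote share, the specification that omits ideology “finds” a sizeable effect even though the true effect of the stance is zero. All variable names and numbers here are invented for the example.

```python
# Simulated illustration of specification "fishing" via an omitted confounder.
# Invented data; the variable names echo the example above but none of the
# numbers come from the actual 2010 House election analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 435  # one observation per House district, purely for flavour

ideology = rng.normal(size=n)                              # district conservatism
harsh = (ideology + rng.normal(size=n) > 0).astype(float)  # harsh stands likelier in conservative districts
vote_share = 0.50 + 0.05 * ideology + rng.normal(scale=0.05, size=n)  # true effect of `harsh` is zero

def ols(y, X):
    """Ordinary least squares: coefficients and conventional standard errors."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return b, se

X_full = np.column_stack([np.ones(n), harsh, ideology])   # controls for ideology
X_fished = np.column_stack([np.ones(n), harsh])           # "simpler" model omitting ideology

for name, X in [("controlling for ideology", X_full), ("omitting ideology       ", X_fished)]:
    b, se = ols(vote_share, X)
    print(f"{name}: effect of harsh stand = {b[1]:+.3f} (s.e. {se[1]:.3f})")
```

Committing to the full specification before the outcome is observed is precisely what removes the temptation to report only the second line.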

I hope to have illustrated the usefulness of preregistration and hope the idea will spread. Currently, though, there is not a comprehensive study registry in the social sciences. However, several proto-registries are available to researchers. All of these registries offer the opportunity for self-registration, wherein the scholar can commit him or herself to a design as a later signal to readers, reviewers, and editors.

In particular, any researcher from any discipline who is interested in self-registering a study is welcome to take advantage of the Political Science Registered Studies Dataverse. This dataverse is a fully-automated resource that allows researchers to upload design information, pre-outcome data, and any preliminary code. Uploaded designs will be publicized via a variety of free media. List members are welcome to subscribe to any of these announcement services, which are linked in the header of the dataverse page.

Besides this automated system, there are also a few other proto-registries of note:

* The EGAP: Experiments in Governance and Politics (http://e-gap.org/design-registration/) website has a registration tool that now accepts and posts detailed preanalysis plans. In instances when designs are sensitive, EGAP offers the service of accepting and archiving sensitive plans with an agreed trigger for posting them publicly.

* J-PAL: The Abdul Latif Jameel Poverty Action Lab (http://www.povertyactionlab.org/Hypothesis-Registry) has been hosting a hypothesis registry since 2009. This registry is for pre-analysis plans of researchers working on randomized controlled trials, which may be submitted before data analysis begins.

* The American Political Science Association’s Experimental Research Section (http://ps-experiments.ucr.edu/) hosts a registry for experiments at its website. (Please note, however, that the website currently is down for maintenance.)