
Open Economics: the story so far…

- August 30, 2013 in Advisory Panel, Announcements, Events, Featured, Open Data, Open Economics, Projects

A year and a half ago we embarked on the Open Economics project with the support of the Alfred P. Sloan Foundation, and we would like to share a short recap of what we have been up to.

Our goal was to define what open data means for the economics profession and to become a central point of reference for those who wanted to learn what it means to have openness, transparency and open access to data in economics.

Advisory Panel of the Open Economics Working Group:
openeconomics.net/advisory-panel/

Advisory Panel

We brought together an Advisory Panel of twenty senior academics who advised us and provided input on people and projects we needed to contact and issues we needed to tackle. The progress of the project has depended on the valuable support of the Advisory Panel.

1st Open Economics Workshop, Dec 17-18 ’12, Cambridge, UK:
openeconomics.net/workshop-dec-2012/

2nd Open Economics Workshop, 11-12 June ’13, Cambridge, MA:
openeconomics.net/workshop-june-2013

International Workshops

We also organised two international workshops: the first held in Cambridge, UK on 17-18 December 2012 and the second in Cambridge, US on 11-12 June 2013. They convened academics, funders, data publishers, information professionals and students to share ideas and build an understanding of the value of open data, the still-persisting barriers to opening up information, and the incentives and structures which our community should encourage.

Open Economics Principles

While defining open data for economics, we also saw the need to issue a statement on the openness of data and code – the Open Economics Principles – to emphasise that the data, program code, metadata and instructions necessary to replicate economics research should be open by default. Launched in August, this statement is now being widely endorsed by the economics community, most recently by the World Bank’s Data Development Group.

Projects

The Open Economics Working Group and several of its more involved members have worked on smaller projects to showcase how data can be made available and what tools can be built to encourage discussion and participation, as well as a wider understanding of economics. We built Yourtopia Italy (http://italia.yourtopia.net/), an app for a user-defined multidimensional index of social progress, which won a special prize in the Apps4Italy competition.

Yourtopia Italy: application of a user-defined multidimensional index of social progress: italia.yourtopia.net

We created the Failed Bank Tracker, a list and timeline visualisation of the European banks which failed during the last financial crisis, and released the Automated Game Play Datasets, the data and code of papers from the Small Artificial Agents for Virtual Economies research project, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis. More recently we launched the Metametrik prototype, a platform for the storage and search of regression results in economics.


Metametrik: a prototype for the storage and search of econometric results: metametrik.openeconomics.net

We also organised several events in London and a topic stream about open knowledge and sustainability at the OKFestival, including a panel bringing together speakers from academia, policy and the open data community to discuss how open data and technology can help improve the measurement of social progress.

Blog and Knowledge Base

We blogged about issues like the benefits of open data from the perspective of economics research, the EDaWaX survey of the data availability policies of economics journals, pre-registration in the social sciences, crowd-funding and open access. We also presented projects like the Statistical Memory of Brazil, Quandl and the AEA registry for randomized controlled trials.

Some of the issues we raised had a wider resonance: for example, when Thomas Herndon found significant errors while trying to replicate the results of Harvard economists Reinhart and Rogoff, we emphasised that while such errors may happen, it is a greater crime not to make the data available with published research in order to allow for replication.

Some outcomes and expectations

We found that opening up data in economics may be a difficult matter, as many economists utilise data which cannot be open because of privacy, confidentiality or because they don’t own that data. Sometimes there are insufficient incentives to disclose data and code. Many economists spend a lot of resources in order to build their datasets and obtain an advantage over other researchers by making use of information rents.

Some journals have been leading the way in putting in place data availability requirements and funders have been demanding data management and sharing plans, yet more general implementation and enforcement is still lacking. There are now, however, more tools and platforms available where researchers can store and share their research content, including data and code.

There are also great benefits in sharing economics data: it enables the scrutiny of research findings and makes it possible to replicate research, it enhances the visibility of research and promotes new uses of the data, and it avoids unnecessary costs for data collection.

In the future we hope to concentrate on projects which would involve graduate students and early career professionals, a generation of economics researchers for whom sharing data and code may become more natural.

Keep in touch

Follow us on Twitter @okfnecon, sign up to the Open Economics mailing list and browse our projects and resources at openeconomics.net.

Introducing the Open Economics Principles

- August 7, 2013 in Announcements, Featured

The Open Economics Working Group would like to introduce the Open Economics Principles, a Statement on Openness of Economic Data and Code. A year and a half ago the Open Economics project began with a mission of becoming a central point of reference and support for those interested in open economic data. In the process of identifying examples and ongoing barriers to opening up data and code in the economics profession, we saw the need to present a statement on the guiding principles of transparency and accountability in economics that would enable replication and scholarly debate, as well as access to knowledge as a public good.

We wrote the Statement on the Open Economics Principles during our First and Second Open Economics International Workshops, receiving feedback from our Advisory Panel and community, with the aim of emphasising the importance of having open access to data and code by default and of addressing some of the issues around the roles of researchers, journal editors, funders and information professionals.

Second Open Economics International Workshop, June 11-12, 2013

Read the statement below and follow this link to endorse the Principles.


Open Economics Principles

Statement on Openness of Economic Data and Code

Economic research is based on building on, reusing and openly criticising the published body of economic knowledge. Furthermore, empirical economic research and data play a central role in policy-making in many important areas of our economies and societies.

Openness enables and underpins scholarly enquiry and debate, and is crucial in ensuring the reproducibility of economic research and analysis. Thus, for economics to function effectively, and for society to reap the full benefits from economic research, it is essential that economic research results, data and analysis be openly and freely available, wherever possible.

  1. Open by default: by default data in its different stages and formats, program code, experimental instructions and metadata – all of the evidence used by economists to support underlying claims – should be open as per the Open Definition [1], free for anyone to use, reuse and redistribute. Specifically, open material should be publicly available and licensed with an appropriate open licence [2].
  2. Privacy and confidentiality: we recognise that there are often cases where, for reasons of privacy, national security or commercial confidentiality, the full data cannot be made openly available. In such cases researchers should share analysis under the least restrictive terms consistent with legal requirements, abiding by the research ethics and guidelines of their community. This should include opening up non-sensitive data, summary data, metadata and code, and facilitating access if the owner of the original data grants other researchers permission to use the data.
  3. Reward structures and data citation: recognising the importance of data and code to the discipline, reward structures should be established to credit these scholarly contributions with appropriate citation, acknowledging that producing data and code with the documentation that makes them reusable by others requires a significant commitment of time and resources. At minimum, all data necessary to understand, assess or extend conclusions in scholarly work should be cited. Acknowledgements of research funding, traditionally limited to publications, could be extended to research data, and the contribution of data curators should be recognised.
  4. Data availability: Investigators should share their data by the time of publication of initial results of analyses of the data, except in compelling circumstances. Data relevant to public policy should be shared as quickly and widely as possible. Funders, journals and their editorial boards should put in place and enforce data availability policies requiring data, code and any other relevant information to be made openly available as soon as possible and at latest upon publication. Data should be in a machine-readable format, with well-documented instructions, and distributed through institutions that have demonstrated the capability to provide long-term stewardship and access. This will enable other researchers to replicate empirical results.
  5. Publicly funded data should be open: publicly funded research work that generates or uses data should ensure that the data is open, free to use, reuse and redistribute under an open licence – and specifically, it should not be kept unavailable or sold under a proprietary licence. Funding agencies and organisations disbursing public funds have a central role to play and should establish policies and mandates that support these principles, including appropriate costs for long-term data availability in the funding of research and the evaluation of such policies [3], and independent funding for the systematic evaluation of open data policies and use.
  6. Usable and discoverable: as simply making data available may not be sufficient for reusing it, data publishers and repository managers should also endeavour to make the data usable and discoverable by others. Documentation, the use of standard code lists and similar practices all help make data more interoperable and reusable, while submission of the data to standard registries and the provision of common metadata enable greater discoverability.

See Reasons and Background: http://openeconomics.net/principles/.


[1] http://opendefinition.org/

[2] Open licences for code are those conformant with the Open Source Definition (see http://opensource.org/licenses); open licences for data should be conformant with the Open Definition (see http://opendefinition.org/licenses/#Data).

[3] A good example of an important positive development in this direction from the United States is http://www.whitehouse.gov/sites/default/files/microsites/ostp/ostp_public_access_memo_2013.pdf

Looking for the Next Open Economics Project Coordinator

- July 3, 2013 in Announcements, Featured, Open Economics

### Open Economics Project Coordinator

The Open Economics Working Group is looking for a project coordinator to lead the Open Economics project in the next phase. The Open Economics Project Coordinator will be the point of contact for the Working Group and will work closely with a community of economists, data publishers, research data professionals, lawyers and funders to make more data and content in economics open, coordinate the creation of tools which aid researchers and facilitate stakeholder dialogue. Some of the responsibilities include:

  • Coordinating the project through all phases of project development including initiating, planning, executing, controlling and closing the project.
  • Representing the Open Economics Working Group at local and international events and serving as point of contact for the Working Group.
  • Leading communications: responsible for communications with Working Group members, the community and interested individuals and organisations; point of contact for the project PI and the Advisory Panel, arranging conference calls and coordinating individual Advisory Panel members’ participation in workshops and other activities.
  • Community coordinator: writing news to the mailing list and using social media to promote activities to the network and beyond; planning the design, content and presentation of the project website; organising and coordinating online meetings, sprints and other online communication.
  • Maintaining the website: inviting and supervising contributions to the blog, actively seeking authors, setting the agenda for presented content and projects; authoring blog posts; surveying relevant projects; publishing news about forthcoming events and documentation (slides, audio, summaries) of past events and activities.
  • Point of contact for the project, responsible for collaboration and communication with other projects within the Open Knowledge Foundation.
  • Preparing reports: writing the financial and substantive midterm and final reports for the funder, as well as weekly reports for the project team.
  • Point of contact and support for the Open Economics fellows: Planning and supervising the recruitment process of the fellows, maintaining regular contact with the fellows, monitoring progress of the fellows’ projects and providing necessary support.
  • Event development and management: concept, planning, research on invitees and relevant projects, programme drafting, sending and following up on invitations, event budgeting, organising the entire event.

#### Person specification

  • Someone self-driven, organised and an excellent communicator, comfortable running a number of initiatives at the same time, speaking at events and travelling.
  • A background in economics and knowledge of quantitative research and data analysis.
  • Preferably some knowledge of academic research and some familiarity with stakeholders in the area of economics research.
  • Comfort with using online communication and working from different locations.
  • The ability to engage with community members at all levels – from senior academics to policy-makers, developers and journalists.
#### Location

We will consider applicants based anywhere in the world; however, a mild preference is given to those close to one of our hubs in London, Berlin or Cambridge.

#### Pay & closing date

The rate is negotiable based on experience. The closing date for applications is July 15, 2013.

#### How to apply

To apply, please send a cover letter highlighting relevant experience and explaining your interest in the role, together with your CV, to [email protected]

Second Open Economics International Workshop

- June 5, 2013 in Announcements, Events, Featured, Open Data, Open Economics, Workshop

Next week, on June 11-12, at the MIT Sloan School of Management, the Open Economics Working Group of the Open Knowledge Foundation will gather about 40 economics professors, social scientists, research data professionals, funders, publishers and journal editors for the second Open Economics International Workshop.

The event will follow up on the first workshop held in Cambridge, UK and will conclude with agreement on a statement of the Open Economics Principles. Speakers include Eric von Hippel, T. Wilson Professor of Innovation Management and Professor of Engineering Systems at MIT; Shaida Badiee, Director of the Development Data Group at the World Bank and champion for the Open Data Initiative; Micah Altman, Director of Research and Head of the Program on Information Science for the MIT Libraries; and Philip E. Bourne, Professor at the University of California San Diego and Associate Director of the RCSB Protein Data Bank.

The workshop will address topics including:

  • Research data sharing: how and where to share economics and social science research data, enforce data management plans, and promote better data management and data use
  • Open and collaborative research: how to create incentives for economists and social scientists to share their research data and methods openly with the academic community
  • Transparent economics: how to achieve greater involvement of the public in the research agenda of economics and social science

The knowledge sharing in economics session will invite a discussion between Joshua Gans, Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management at the University of Toronto and Co-Director of the Research Program on the Economics of Knowledge Contribution and Distribution; John Rust, Professor of Economics at Georgetown University and co-founder of EconJobMarket.org; Gert Wagner, Professor of Economics at the Berlin University of Technology (TUB) and Chairman of the German Census Commission and the German Council for Social and Economic Data; and Daniel Feenberg, Research Associate in the Public Economics program and Director of Information Technology at the National Bureau of Economic Research.

The session on research data sharing will be chaired by Thomas Bourke, Economics Librarian at the European University Institute, and will discuss the efficient sharing of data and how to create and enforce reward structures for researchers who produce and share high-quality data. It gathers experts from the field including Mercè Crosas, Director of Data Science at the Institute for Quantitative Social Science (IQSS) at Harvard University; Amy Pienta, Acquisitions Director at the Inter-university Consortium for Political and Social Research (ICPSR); Joan Starr, Chair of the Metadata Working Group of DataCite; and Brian Hole, founder of the open access academic publisher Ubiquity Press.

Benjamin Mako Hill, researcher and PhD candidate at MIT and the Berkman Center for Internet and Society at Harvard University, will chair the session on the evolving evidence base of social science, which will highlight examples of how economists can broaden their perspective on collecting and using data through different means: mobile data collection, the web, or crowd-sourcing. The session will also consider how to engage the broader community and make economic research and decision-making more transparent. Speakers include Amparo Ballivian, Lead Economist with the Development Data Group of the World Bank; Michael P. McDonald, Associate Professor at George Mason University and co-principal investigator on the Public Mapping Project; and Pablo de Pedraza, Professor at the University of Salamanca and Chair of Webdatanet.

The morning session on June 12 will gather different stakeholders to discuss how to share responsibility and pursue joint action. It will be chaired by Mireille van Eechoud, Professor of Information Law at IViR, and will include short statements by Daniel Goroff, Vice President and Program Director at the Alfred P. Sloan Foundation; Nikos Askitas, Head of Data and Technology at the Institute for the Study of Labor (IZA); Carson Christiano, Head of CEGA’s partnership development efforts and coordinator of the Berkeley Initiative for Transparency in the Social Sciences (BITSS); and Jean Roth, Data Specialist at the National Bureau of Economic Research.

At the end of the workshop the Working Group will discuss the future plans of the project and gather feedback on possible initiatives for translating discussions into concrete action plans. Slides and audio will be available on the website after the workshop. If you have any questions please contact economics [at] okfn.org.

Metametrik Sprint in London, May 25

- May 2, 2013 in Announcements, Call for participation, Events, Featured, Metametrik, Sprint

The Open Economics Working Group invites you to a one-day sprint to create a machine-readable format for the reporting of regression results.

  • When: Saturday, May 25, 10:00-16:00
  • Where: Centre for Creative Collaboration (tbc), 16 Acton Street, London, WC1X 9NG
  • How to participate: please write to economics [at] okfn.org

The event is meant for graduate students in economics and quantitative social science, as well as other scientists and researchers who work with quantitative data analysis and regressions. We also welcome developers with some knowledge of XML and other markup languages, and anyone else interested in contributing to this project.

About Metametrik

Metametrik, as a machine-readable format and platform to store econometric results, will offer a universal form for presenting empirical results. Furthermore, the resulting database would present new opportunities for data visualisation and “meta-regressions”, i.e. statistical analysis of all empirical contributions in a certain area.

During the sprint we will create a prototype of a format for saving the regression results of empirical economics papers, which would be the basis for meta-analysis of relationships in economics. The Metametrik format would include (a rough sketch of a record follows the list):

  • an XML (or other markup language) format to describe regression output, capturing which dependent and independent variables were used, the type of dataset (e.g. time series, panel), the sign and magnitude of the relationship (coefficient and t-statistic), data sources and the type of regression (e.g. OLS, 2SLS, structural equations)
  • a database to store the results (possible integration with CKAN)
  • a user interface allowing results to be entered, translated and saved in the Metametrik format; results could also be imported directly from statistical packages
  • visualisation of results and a GUI, enabling queries on the database and displaying basic statistics about the relationships
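To make this concrete, here is a minimal sketch of what a single Metametrik record could look like, built with Python’s standard library. All element and attribute names are illustrative assumptions, not a finalised schema.

```python
# A minimal sketch of one hypothetical Metametrik record; element and
# attribute names are assumptions, not a finalised schema.
import xml.etree.ElementTree as ET

record = ET.Element("regression", method="OLS", dataset_type="panel")
ET.SubElement(record, "dependent").text = "gdp_growth"
coef = ET.SubElement(record, "coefficient", variable="democracy_index")
ET.SubElement(coef, "estimate").text = "0.42"      # sign and magnitude
ET.SubElement(coef, "t_statistic").text = "2.10"
ET.SubElement(record, "data_source").text = "Penn World Table"

print(ET.tostring(record, encoding="unicode"))
```

A record like this could then be indexed by the database and retrieved in queries for all results linking, say, democracy_index to gdp_growth.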

Background

Since computing power and data storage have become cheaper and more easily available, the number of empirical papers in economics has increased dramatically. Despite the large number of empirical papers, however, there is still no unified, machine-readable standard for saving regression results. Researchers are often faced with a large volume of empirical papers which describe regression results in similar yet differentiated ways.

Like bibliographic machine-readable formats (e.g. BibTeX), the new standard would facilitate the dissemination and organisation of existing results. Ideally, this project would offer open storage where researchers can submit their regression results (for example in an XML-type format). The standard could also be implemented in a wide range of open-source econometric packages and projects like R or RePEc.

From a practical perspective, this project would greatly help to organise the large body of existing regressions and facilitate literature reviews: if someone is interested in the relationship between democracy and economic development, for example, they need not go through the pile of current papers but can simply look up the relationship in the open storage, which will produce a list of existing results along with intuitive visualisations (what percentage of results are positive or negative, and how the results evolve over time, i.e. whether there is convergence). From an academic perspective, the project would also facilitate the compilation of meta-regressions, which have become increasingly popular. Metametrik will be released under an open licence.
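As a sketch of how such a lookup might summarise stored results, suppose each record has been parsed down to its estimate and publication year. The field names and toy values below are purely illustrative placeholders, not real results.

```python
# Toy sketch: summarising stored estimates for one relationship.
# Field names and values are illustrative placeholders, not real results.
from collections import defaultdict

records = [
    {"estimate": 0.42, "year": 2004},
    {"estimate": -0.10, "year": 2008},
    {"estimate": 0.31, "year": 2011},
]

positive = sum(r["estimate"] > 0 for r in records)
print(f"{positive / len(records):.0%} of stored results are positive")

# Average estimate per year: a crude check for convergence over time.
by_year = defaultdict(list)
for r in records:
    by_year[r["year"]].append(r["estimate"])
for year in sorted(by_year):
    print(year, sum(by_year[year]) / len(by_year[year]))
```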

If you have further questions, please contact us at economics [at] okfn.org.

Automated Game Play Datasets: New Releases

- April 24, 2013 in Announcements, Data Release, Featured, Open Data, Open Economics, Open Research

Last month we released ten datasets from the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [See dedicated webpage].

We are now happy to announce that the list has grown with seven more datasets, added this month and hosted at datahub.io, including:


Clark, K. & Sefton, M., 2001. “Repetition and signalling: experimental evidence from games with efficient equilibria.” Economics Letters, 70(3), pp.357–362.
Link to publication | Link to data

Costa-Gomes, M. & Crawford, V., 2006. “Cognition and Behavior in Two-Person Guessing Games: An Experimental Study.” The American Economic Review, 96(5), pp.1737–1768.
Link to publication | Link to data

Costa-Gomes, M., Crawford, V. & Broseta, B., 2001. “Cognition and Behavior in Normal-Form Games: An Experimental Study.” Econometrica, 69(5), pp.1193–1235.
Link to publication | Link to data

Crawford, V., Gneezy, U. & Rottenstreich, Y., 2008. “The Power of Focal Points is Limited: Even Minute Payoff Asymmetry May Yield Large Coordination Failures.” The American Economic Review, 98(4), pp.1443–1458.
Link to publication | Link to data

Feltovich, N., Iwasaki, A. & Oda, S., 2012. “Payoff levels, loss avoidance, and equilibrium selection in games with multiple equilibria: an experimental study.” Economic Inquiry, 50(4), pp.932–952.
Link to publication | Link to data

Feltovich, N. & Oda, S., 2013. “The effect of matching mechanism on learning in games played under limited information.” Working paper.
Link to publication | Link to data

Schmidt, D., Shupp, R., Walker, J.M. & Ostrom, E., 2003. “Playing Safe in Coordination Games: The Roles of Risk Dominance, Payoff Dominance, and History of Play.” Games and Economic Behavior, 42(2), pp.281–299.
Link to publication | Link to data

Any questions or comments? Please get in touch: economics [at] okfn.org.

Open Research Data Handbook – Call for Case Studies

- April 9, 2013 in Announcements, Call for participation, Featured, Open Data, Open Economics, Open Research

The OKF Open Research Data Handbook – a collaborative and volunteer-led guide to open research data practices – is beginning to take shape, and we need you! We’re looking for case studies showing the benefits of open research data: either researchers who have personal stories to share or people with relevant expertise willing to write short sections.

We’re looking to develop a resource that provides an introduction to open research data: what it actually is, the benefits of opening up research data, and the processes and tools researchers need to do so, with examples from different academic disciplines.

Following on from a couple of sprints, a few of us are in the process of collating the first few chapters, and we’ll be asking for comment on these soon.

In the meantime, please send us case studies to include, or let us know if you are willing to contribute your expertise to this handbook.


We now need your help to gather concrete case studies which detail your experiences of working with open research data. Specifically, we are looking for:

  • Stories of the benefits you have seen as a result of open research data practices
  • Challenges you have faced in open research, and how you overcame them
  • Case studies of tools you have used to share research data or to make it openly available
  • Examples of how failing to follow open research practices has hindered the progress of science, economics, social science, etc.
  • … more ideas from you!

Case studies should be around 200-500 words long. They should be concrete, based on real experiences, and should focus on one specific angle of open research data (you can submit more than one study!).

Please fill out the following form in order to submit a case study:

Link to form

If you have any questions, please contact us at researchhandbook [at] okfn.org.

Releasing the Automated Game Play Datasets

- March 7, 2013 in Announcements, Data Release, Featured, Open Data, Open Economics, Open Research, Open Tools

We are very happy to announce that the Open Economics Working Group is releasing the datasets of the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [See dedicated webpage].

The authors who participated in the study have given their permission to publish their data online. We hope that making this data available online will aid researchers working in this field. This initiative is motivated by our belief that for economic research to be reliable and trusted, it should be possible to reproduce research findings – which is difficult or even impossible without the availability of the data and code. Making material openly available reduces the barriers to doing reproducible research to a minimum.

If you are interested in knowing more, or would like help releasing research data in your field, please contact us at: economics [at] okfn.org

List of Datasets and Code

Andreoni, J. & Miller, J.H., 1993. “Rational cooperation in the finitely repeated prisoner’s dilemma: Experimental evidence.” The Economic Journal, pp.570–585.
Link to publication | Link to data

Dal Bó, P., 2005. “Cooperation under the shadow of the future: experimental evidence from infinitely repeated games.” The American Economic Review, 95(5), pp.1591–1604.
Link to publication | Link to data

Charness, G., Frechette, G.R. & Qin, C.-Z., 2007. “Endogenous transfers in the Prisoner’s Dilemma game: An experimental test of cooperation and coordination.” Games and Economic Behavior, 60(2), pp.287–306.
Link to publication | Link to data

Clark, K., Kay, S. & Sefton, M., 2001. “When are Nash equilibria self-enforcing? An experimental analysis.” International Journal of Game Theory, 29(4), pp.495–515.
Link to publication | Link to data

Duffy, J. & Feltovich, N., 2002. “Do Actions Speak Louder Than Words? An Experimental Comparison of Observation and Cheap Talk.” Games and Economic Behavior, 39(1), pp.1–27.
Link to publication | Link to data

Duffy, J. & Ochs, J., 2009. “Cooperative behavior and the frequency of social interaction.” Games and Economic Behavior, 66(2), pp.785–812.
Link to publication | Link to data

Knez, M. & Camerer, C., 2000. “Increasing cooperation in prisoner’s dilemmas by establishing a precedent of efficiency in coordination games.” Organizational Behavior and Human Decision Processes, 82(2), pp.194–216.
Link to publication | Link to data

Ochs, J., 1995. “Games with unique, mixed strategy equilibria: An experimental study.” Games and Economic Behavior, 10(1), pp.202–217.
Link to publication | Link to data

Ong, D. & Chen, Z., 2012. “Tiger Women: An All-Pay Auction Experiment on Gender Signaling of Desire to Win.” Available at SSRN 1976782.
Link to publication | Link to data

Vlaev, I. & Chater, N., 2006. “Game relativity: How context influences strategic decision making.” Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(1), p.131.
Link to publication | Link to data

Project Background

An important need for developing better economic policy prescriptions is an improved method of validating theories. Originally economics depended on field data from surveys and laboratory experiments. An alternative method of validating theories is through the use of artificial or virtual economies. If a virtual world is an adequate description of a real economy, then a good economic theory ought to be able to predict outcomes in that setting. An artificial environment offers enormous advantages over the field and laboratory: complete control – for example, over risk aversion and social preferences – and great speed in creating economies and validating theories. In economics the use of virtual economies can potentially enable us to deal with heterogeneity, with small frictions, and with expectations that are backward looking rather than determined in equilibrium. These are difficult or impractical to combine in existing calibrations or Monte Carlo simulations.

The goal of this project is to build artificial agents by developing computer programs that act like human beings in the laboratory. We focus on the simplest type of problem of interest to economists: simple one-shot, two-player simultaneous-move games. There is a wide variety of existing published data on laboratory behavior that will be our primary testing ground for our computer programs. As we achieve greater success we want to see whether our programs can adapt themselves to changes in the rules: for example, if payments are changed in a certain way, the computer programs will play differently; do people do the same? In some cases we may be able to answer these questions with data from existing studies; in others we will need to conduct our own experimental studies.

There is a great deal of existing research relevant to the current project. The state of the art in the study of virtual economies is agent-based modeling (Bonabeau (2002)). In addition, crucially related are both the theoretical literature on learning in games and the empirical literature on behavior in the experimental laboratory. From the perspective of theory, the most relevant economic research is Foster and Vohra’s (1999) work on calibrated play and the related work on smooth fictitious play (Fudenberg and Levine (1998)) and regret algorithms (Hart and Mas-Colell (2000)). There is also a relevant literature in computational game theory on regret optimization, such as Nisan et al. (2007). Empirical work on human play in the laboratory has two basic threads: research on first-time play, such as Nagel (1995) and the hierarchical models of Stahl and Wilson (1994), Costa-Gomes, Crawford and Broseta (2001) and Camerer, Ho and Chong (2004); and the learning models, most notably the reinforcement learning model of Erev and Roth (1998) and the EWA model (Ho, Camerer and Chong (2007)). The latter can be considered state of the art, including as it does both reinforcement and fictitious-play-type learning and initial play from a cognitive hierarchy.
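To give a flavour of the learning models mentioned above, here is a minimal sketch of Erev-Roth style reinforcement learning for one player in a toy 2x2 game: the probability of choosing an action is proportional to its accumulated payoff. The payoff matrix and parameters are illustrative only and are not taken from the project.

```python
import random

# Toy 2x2 game: PAYOFF[own_action][opponent_action] is the row player's payoff.
# Values are illustrative, not from the project described above.
PAYOFF = [[2, 0],
          [0, 1]]

propensity = [1.0, 1.0]  # initial propensities for actions 0 and 1

def choose(props):
    """Sample an action with probability proportional to its propensity."""
    r = random.uniform(0, sum(props))
    return 0 if r < props[0] else 1

for t in range(1000):
    own = choose(propensity)
    opponent = random.randint(0, 1)           # stand-in for the other player
    propensity[own] += PAYOFF[own][opponent]  # reinforce the chosen action

total = sum(propensity)
print([round(p / total, 3) for p in propensity])  # long-run choice probabilities
```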

Sovereign Credit Risk: An Open Database

- January 31, 2013 in Data Release, External Projects, Featured, Open Data, Open Economics, Open Research, Public Finance and Government Data, Public Sector Credit

Throughout the Eurozone, credit rating agencies have been under attack for their lack of transparency and for their pro-cyclical sovereign rating actions. In the humble belief that the crowd can outperform the credit rating oracles, we are introducing an open database of historical sovereign risk data. It is available at http://sovdefdata.appspot.com/ where community members can both view and edit the data. Once the quality of this data is sufficient, the data set can be used to create unbiased, transparent models of sovereign credit risk.

The database contains central government revenue, expenditure, public debt and interest costs from the 19th century through 2011 – along with crisis indicators taken from Reinhart and Rogoff’s public database.

[Figure: Central government interest to revenue, 2010]

Why This Database?

Prior to the appearance of This Time is Different, discussions of sovereign credit more often revolved around political and trade-related factors. Reinhart and Rogoff have more appropriately focused the discussion on debt sustainability. As with individual and corporate debt, government debt becomes more risky as a government’s debt burden increases. While intuitively obvious, this truth too often gets lost among the multitude of criteria listed by rating agencies and within the politically charged fiscal policy debate.

In addition to emphasizing the importance of debt sustainability, Reinhart and Rogoff showed the virtues of considering a longer history of sovereign debt crises. As they state in their preface:

“Above all, our emphasis is on looking at long spans of history to catch sight of ’rare’ events that are all too often forgotten, although they turn out to be far more common and similar than people seem to think. Indeed, analysts, policy makers, and even academic economists have an unfortunate tendency to view recent experience through the narrow window opened by standard data sets, typically based on a narrow range of experience in terms of countries and time periods. A large fraction of the academic and policy literature on debt and default draws conclusions on data collected since 1980, in no small part because such data are the most readily accessible. This approach would be fine except for the fact that financial crises have much longer cycles, and a data set that covers twenty-five years simply cannot give one an adequate perspective…”

Reinhart and Rogoff greatly advanced what had been an innumerate conversation about public debt by compiling, analyzing and promulgating a database containing a long time series of sovereign data. Their metric for analyzing debt sustainability – the ratio of general government debt to GDP – has now become a central focus of analysis.

We see this as a mixed blessing. While the general government debt to GDP ratio properly relates sovereign debt to the ability of the underlying economy to support it, the metric has three important limitations.

First, the use of a general government indicator can be misleading. General government debt refers to the aggregate borrowing of the sovereign and the country’s state, provincial and local governments. If a highly indebted local government – like Jefferson County, Alabama, USA – can default without being bailed out by the central government, it is hard to see why that local issuer’s debt should be included in the numerator of a sovereign risk metric. A counter to this argument is that the United States is almost unique in that it doesn’t guarantee sub-sovereign debts. But, clearly, neither the rating agencies nor the market believe that these guarantees are ironclad: otherwise all sub-sovereign debt would carry the sovereign rating and there would be no spread between sovereign and sub-sovereign bonds – other than perhaps a small differential to accommodate liquidity concerns and transaction costs.

Second, governments vary in their ability to harvest tax revenue from their economic base. For example, the Greek and US governments are less capable of realizing revenue from a given amount of economic activity than a Scandinavian sovereign. Widespread tax evasion (as in Greece) or political barriers to tax increases (as in the US) can limit a government’s ability to raise revenue. Thus, government revenue may be a better metric than GDP for gauging a sovereign’s ability to service its debt.

Finally, the stock of debt is not the best measure of its burden. Countries that face comparatively low interest rates can sustain higher levels of debt. For example, the United Kingdom avoided default despite a debt/GDP ratio of roughly 250% at the end of World War II. The amount of interest a sovereign must pay on its debt each year may thus be a better indicator of debt burden.

Our new database attempts to address these concerns by layering central government revenue, expenditure and interest data on top of the statistics Reinhart and Rogoff previously published.

A Public Resource Requiring Public Input

Unlike many financial data sets, this compilation is being offered free of charge and without a registration requirement. It is offered in the hope that it, too, will advance our understanding of sovereign credit risk.

The database contains a large number of data points and we have made efforts to quality-control the information. That said, there are substantial gaps, inconsistencies and inaccuracies in the data we are publishing.

Our goal in releasing the database is to encourage a mass collaboration process directed at enhancing the information. Just as Wikipedia articles asymptotically approach perfection through participation by the crowd, we hope that this database can be cleansed by its user community. There are tens of thousands of economists, historians, fiscal researchers and concerned citizens around the world who are capable of improving this data, and we hope that they will find us.

To encourage participation, we have added Wiki-style capabilities to the user interface. Users who wish to make changes can log in with an OpenID and edit individual data points. They can also enter comments to explain their changes. User changes are stored in an audit trail, which moderators will periodically review – accepting only those that can be verified while rolling back others.

This design leverages the trigger functionality of MySQL to build a database audit trail that moderators can view and edit. We have thus married the collaborative strengths of a Wiki to the structure of a relational database. Maintaining a consistent structure is crucial for a dataset like this because it must ultimately be analyzed by a statistical tool such as R.

The unique approach to editing database fields Wiki-style was developed by my colleague, Vadim Ivlev. Vadim will contribute the underlying Python, JavaScript and MySQL code to a public GitHub repository in a few days.
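Since the code itself is not published with this post, the snippet below is only a guess at the general pattern described above: an AFTER UPDATE trigger that copies the previous value of an edited data point into a history table for moderators to review. All table and column names are assumptions.

```python
# Sketch of the audit-trail pattern described above. Table and column
# names are assumptions; the real schema lives in the project's repository.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="sovdef",
                               password="secret", database="sovdef")
cur = conn.cursor()

# Whenever a data point is edited, preserve the old value and who changed it.
cur.execute("""
    CREATE TRIGGER datapoint_audit
    AFTER UPDATE ON datapoints
    FOR EACH ROW
    INSERT INTO datapoint_history (datapoint_id, old_value, editor, changed_at)
    VALUES (OLD.id, OLD.value, NEW.last_editor, NOW())
""")
conn.commit()
```

Because the trigger runs inside the database, every edit path (web UI, bulk import, direct SQL) leaves the same audit record, which is what lets moderators accept or roll back changes after the fact.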

Implications for Sovereign Ratings

Once the dataset reaches an acceptable quality level, it can be used to support logit or probit analysis of sovereign defaults. Our belief – based on case study evidence at the sovereign level and statistical modeling of US sub-sovereigns – is that the ratio of interest expense to revenue and annual revenue change are statistically significant predictors of default. We await confirmation or refutation of this thesis from the data set. If statistically significant indicators are found, it will be possible to build a predictive model of sovereign default that could be hosted by our partners at Wikirating. The result, we hope, will be a credible, transparent and collaborative alternative to the credit ratings status quo.
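A sketch of the kind of logit this implies, using pandas and statsmodels; the CSV export and its column names are hypothetical stand-ins for the database fields.

```python
# Sketch of a logit of sovereign default on the interest/revenue ratio and
# annual revenue change. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sovereign_panel.csv")  # hypothetical export of the database
df["int_to_rev"] = df["interest_expense"] / df["revenue"]
# Assumes rows are sorted by year within each country.
df["rev_growth"] = df.groupby("country")["revenue"].pct_change()
df = df.dropna(subset=["int_to_rev", "rev_growth", "default"])

X = sm.add_constant(df[["int_to_rev", "rev_growth"]])
print(sm.Logit(df["default"], X).fit().summary())
```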

Sources and Acknowledgements

Aside from the data set provided by Reinhart and Rogoff, we also relied heavily upon the Center for Financial Stability’s Historical Financial Statistics. The goal of HFS is “to be a source of comprehensive, authoritative, easy-to-use macroeconomic data stretching back several centuries.” This ambitious effort includes data on exchange rates, prices, interest rates, national income accounts and population, in addition to government finance statistics. Kurt Schuler, the project leader for HFS, generously offered numerous suggestions about data sources as well as connections to other researchers who gave us advice.

Other key international data sources used in compiling the database were:

  • International Monetary Fund’s Government Finance Statistics
  • Eurostat
  • UN Statistical Yearbook
  • League of Nations’ Statistical Yearbook
  • B. R. Mitchell’s International Historical Statistics, various editions, London: Palgrave Macmillan
  • Almanach de Gotha
  • The Statesman’s Year Book
  • Corporation of Foreign Bondholders Annual Reports
  • Statistical Abstract for the Principal and Other Foreign Countries
  • For several countries, nation-specific time series from finance ministry or national statistical service websites

We would also like to thank Dr. John Gerring of Boston University and Co-Director of the CLIO World Tables project, for sharing data and providing further leads, as well as Dr. Joshua Greene, author of Public Finance: An International Perspective, for alerting us to the IMF Library in Washington, DC.

A number of researchers and developers played valuable roles in compiling the data and placing it on line. We would especially like to thank Charles Tian, T. Wayne Pugh, Amir Muhammed, Anshul Gupta and Vadim Ivlev, as well as Karthick Palaniappan and his colleagues at H-Garb Informatix in Chennai, India for their contributions.

Finally, we would like to thank the National University of Singapore’s Risk Management Institute for the generous grant that made this work possible.

Launching the Open Sustainability Working Group

- November 30, 2012 in Announcements, Call for participation, Environment, Energy and Sustainability, Featured, Open Data, Open Research

This blog post is written by Jorge Zapico, researcher at the Centre for Sustainable Communications at KTH, the Royal Institute of Technology, and Velichka Dimitrova, Project Coordinator for Economics and Energy at the Open Knowledge Foundation, and is cross-posted from the main blog.

Sign up to Open Sustainability

Sustainability is one of the most important challenges of our time. We are facing global environmental crises such as climate change, resource depletion, deforestation, overfishing, eutrophication, loss of biodiversity, soil degradation and environmental pollution. We need to move towards a more sustainable and resilient society, one that ensures the well-being of current and future generations and that allows us to progress while stewarding the finite resources and the ecosystems we depend on.

Data is needed to monitor the condition of the environment and to measure how we are performing and progressing (or not) towards sustainability. Transparency and feedback are key for good decision-making, for allowing accountability and for tracking and tuning performance. This is true at an institutional level, such as working with national climate change goals; at a company level, such as deciding the materials for building a product; and at a personal level, deciding between chicken and salmon at the supermarket. However, most environmental information is closed, outdated, static, and/or locked in text documents that cannot be processed.

For instance, unlike gross domestic product (GDP) and other publicly available data, carbon dioxide emissions data is not published frequently and in disaggregated form. While the current international climate negotiations at Doha discuss joint global efforts for the reduction of greenhouse gas emissions, climate data is not freely and widely available.

“Demand CO2 data!” urged Hans Rosling at the Open Knowledge Festival in Helsinki last September, encouraging a data-driven discussion of energy and resources. “We can have climate change beyond our expectations, which we haven’t done anything in time for,” said Rosling in outlining the biggest challenges of our time. Yet activists do not even demand the data, and many countries, such as Sweden, show up for climate negotiations without having reported their CO2 emissions for many months. Our countries should report on climate data in order for us to see the big picture.

Sustainability data should be open and freely available so anyone is free to use, reuse and redistribute it. This data should be easy to access, usable for the public, and also available in standard machine-readable formats to enable reuse and remixing. And by sustainability data we do not mean only CO2 information, but all data that is necessary for measuring the state of, and changes in, the environment, and data which supports progress towards sustainability. This includes a diversity of things: scientific climate data and temperature records, environmental impact assessments of products and services, emissions and pollution information from companies and governments, energy production data and ecosystem health indicators.

To move towards this goal, we are founding a new Working Group on Open Sustainability, which seeks to:

  • advocate and promote the opening up of sustainability information and datasets
  • collect sustainability information and maintain a knowledge base of datasets
  • act as a support environment and hub for the development of community-driven projects
  • provide a neutral platform for working towards standards and harmonization of open sustainability data between different groups and projects

The Open Sustainability Working Group is open for anyone to join. We hope to form an interdisciplinary network from a range of backgrounds: academics, business people, civil servants, technologists, campaigners, consultants, and those from NGOs and international institutions. Relevant areas of expertise include sustainability, industrial ecology, climate and environmental science, cleanweb development, ecological economics, social science, energy, open data and transparency. Join the Open Sustainability Working Group by signing up to the mailing list to share your ideas and to contribute.

Creating a more sustainable society and mitigating climate change are some of the very hardest challenges we face. It will require us to collaborate, to create new knowledge together and to find new ways of doing things. We need open data about the state of the planet, we need transparency about emissions and the impact of products and industries, and we need feedback and accountability. We want to leverage all the ideas, technologies and energy we can to prevent catastrophic environmental change.

This initiative was started by the OKFestival Open Knowledge and Sustainability and Green Hackathon team, including Jorge Zapico, Hannes Ebner (Centre for Sustainable Communications at KTH), James Smith (Cleanweb UK), Chris Adams (AMEE), Jack Townsend (Southampton University) and Velichka Dimitrova (Open Knowledge Foundation).