
Dutch PhD workshop on research design, open access and open data

- February 1, 2013 in Economic Publishing, EDaWaX, External Projects, Featured, Open Access, Open Data, Open Economics

This blog post is written by Esther Hoorn, Copyright Librarian, University of Groningen, the Netherlands.

If Roald Dahl were still alive, he would certainly be tempted to write a book about the Dutch social psychologist Diederik Stapel. Not only did Stapel make up the research data to support his conclusions, he also ate all the M&M’s he had bought with public money for interviews with fictitious pupils at fictitious high schools. In the Netherlands, the Stapel fraud was a catalyst for drawing attention to research integrity and the availability of research data. A new generation of researchers needs to be aware of the data-sharing policy of the Dutch research funder NWO, the EU policy and the services of DANS, Data Archiving and Networked Services. In the near future, a data management plan will be required in every research proposal.

Verifiability

For some time now, the library at the University of Groningen has been organizing workshops for PhD students to raise awareness of the shift towards Open Access. Open Access and copyright are the main themes. The request to also address the verifiability of research data came from SOM, the Research Institute of the Faculty of Economics and Business. The workshop is given as part of the Research Design course of the PhD program. The blog post Research data management in economic journals proved very useful for getting an overview of the related issues in this field.

Open Access

As we often see, Open Access was a new issue to most of the students. Because the library buys licenses, the students don’t perceive a problem with access to research journals. Moreover, they are not aware of the large sums that universities currently pay to finance access exclusively for their own staff and students. Once they understand the issue, there is strong interest. Some see a parallel with innovative distribution models for music. The PhD students come from all over the world, and Open Access is increasingly being addressed in every country. One student from Indonesia mentioned that the Indonesian government requires his dissertation to be available through the national Open Access repository. Chinese students were surprised by the availability of information on Open Access in China.

Assignment

The students prepared an assignment with some questions on Open Access and sharing research data. The first question is still about the impact factor of the journals in which they intend to publish. The questions brought the discussion to article-level metrics and alternative ways to organize the peer review of Open Access journals.

Will availability of research data stimulate open access?

Example of the Open Access journal Economics

The blog post Research data management in economic journals presents the results of the German project EDaWaX, European Data Watch Extended. An important result of the survey points to the role of association and university presses: it appears that many journals followed the data availability policy of the American Economic Association.

> We found out that mainly university or association presses have high to very high percentages of journals owning data availability policies while the major scientific publishers stayed below 20%.
>
> Out of the 29 journals with data availability policies, 10 used initially the data availability policy implemented by the American Economic Review (AER). These journals either used exactly the same policy or a slightly modified version.

For students it is reassuring to see how associations take up their role in addressing this issue. An example of an Open Access journal that adopted the AER policy is Economics. And yes, this journal does have an impact factor in the Social Science Citation Index, as well as the possibility to archive datasets in the Dataverse Network.

Re-use of research data for peer review

One of the students suggested that the public availability of research data (instead of merely research findings) may lead to innovative forms of review. This may facilitate a further shift towards Open Access. With access to the underlying research data and the methodologies used, scientists may be in a better position to evaluate the quality of research conducted by peers. The typical quality label given by top and very good journals may then become less relevant over time.
It was also discussed that journals might not publish a fixed number of papers in a volume released, say, four times a year, but rather publish qualifying papers as they become available throughout the year. Another point raised was that a substantial change in the existing publication mechanics will likely require either top journals or top business schools to lead the way, while associations of leading scientists in a certain field may also play an important role in such a conversion.

First Open Economics International Workshop Recap

- January 25, 2013 in Economic Publishing, Events, Featured, Open Access, Open Data, Open Economics, Open Research, Open Tools, Workshop

The first Open Economics International Workshop brought together 40 academic economists, data publishers, funders of economics research, researchers and practitioners for a two-day event at Emmanuel College in Cambridge, UK. The aim of the workshop was to build an understanding of the value of open data and open tools for the economics profession, the obstacles to opening up information, and the role of greater openness in the academy. The event was organised by the Open Knowledge Foundation and the Centre for Intellectual Property and Information Law and was supported by the Alfred P. Sloan Foundation. Audio and slides are available on the event’s webpage.


Setting the Scene

The Setting the Scene session gave some context to “Open Economics” in the knowledge society, with examples from outside the discipline and a discussion of reproducible research. Rufus Pollock (Open Knowledge Foundation) emphasised that there is necessary change and substantial potential for economics: 1) open “core” economic data outside the academy, 2) open as default for data in the academy, 3) real growth in citizen economics and outside participation. Daniel Goroff (Alfred P. Sloan Foundation) drew attention to the work of the Alfred P. Sloan Foundation in emphasising the importance of knowledge and its use for making decisions, and of data and knowledge as a non-rival, non-excludable public good. Tim Hubbard (Wellcome Trust Sanger Institute) spoke about the potential of large-scale data collection around individuals for improving healthcare and how centralised global repositories work in the field of bioinformatics. Victoria Stodden (Columbia University / RunMyCode) stressed the importance of reproducibility for economic research and as an essential part of scientific methodology, and presented the RunMyCode project.

Open Data in Economics

The Open Data in Economics session was chaired by Christian Zimmermann (Federal Reserve Bank of St. Louis / RePEc) and covered several projects and ideas from various institutions. The session examined examples of open data in economics and sought to discover whether these examples are sustainable and can be implemented in other contexts: whether the right incentives exist. Paul David (Stanford University / SIEPR) characterised the open science system as better than any other at the rapid accumulation of reliable knowledge, whereas proprietary systems are very good at extracting rent from existing knowledge. A balance between these two systems should be established so that they can work within the same organisational system, since separately they are distinctly suboptimal. Johannes Kiess (World Bank) underlined that having the data available is often not enough: “It is really important to teach people how to understand these datasets: data journalists, NGOs, citizens, coders, etc.”. The World Bank has implemented projects to incentivise the use of its data and is helping countries to open up their data. For economists, he mentioned, having a valuable dataset to publish on is an important asset; there are therefore insufficient incentives for sharing.

Eustáquio J. Reis (Institute of Applied Economic Research – Ipea) related his experience of establishing the Ipea statistical database and other projects for historical data series and data digitalisation in Brazil. He shared that the culture of the economics community is not one of collaboration, where people willingly share or support and encourage data curation. Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics) spoke about the EDaWaX project, which conducted a study of the data availability policies of economics journals and will establish a publication-related data archive for an economics journal in Germany.

Legal, Cultural and other Barriers to Information Sharing in Economics

The session presented different impediments to the disclosure of data in economics from the perspectives of two lawyers and two economists. Lionel Bently (University of Cambridge / CIPIL) drew attention to the fact that a whole range of different legal mechanisms operate to restrict the dissemination of information, yet on the other hand there is also a range of mechanisms which help to make information available. Lionel questioned whether the open data standard would always be the optimal way to produce high-quality economic research, or whether there is also a place for modulated/intermediate positions where data is available only on conditions, only in part or only for certain forms of use. Mireille van Eechoud (Institute for Information Law) described the EU Public Sector Information Directive – the most generic document related to open government data – and the progress made in opening up information published by governments. Mireille also pointed out that legal norms have only limited value without the internalised cultural attitudes and structures that really make greater access to information work.

David Newbery (University of Cambridge) presented an example from the electricity markets and insisted that a good supply of data requires informed demand, coming from regulators who are charged with monitoring markets, detecting abuse, upholding fair competition and defending consumers. John Rust (Georgetown University) said that the government is an important provider of data which is otherwise too costly to collect, yet a number of issues exist, including confidentiality, excessive bureaucratic caution and the public finance crisis. There are also many opportunities for research in the private sector, where some of the data can be made available (redacting confidential information), and the public non-profit sector can likewise play a tremendous role as a force to organise markets for the better, set standards and focus on targeted domains.

Current Data Deposits and Releases – Mandating Open Data?

The session was chaired by Daniel Goroff (Alfred P. Sloan Foundation) and brought together funders and publishers to discuss their role in requiring data from economic research to be publicly available and the importance of dissemination for publishing.

Albert Bravo-Biosca (NESTA) emphasised that mandating open data begins much earlier in the process: funders can encourage the collection by government of particular data which is the basis for research, and can also act as intermediaries for the release of open data by the private sector. Open data is interesting, but it is even more interesting when it is appropriately linked and combined with other data, and there is value in examples and case studies for demonstrating benefits. Some caution is needed, however, as opening up some data might result in less data being collected.

Toby Green (OECD Publishing) made a point of the difference between posting and publishing: making content available does not always mean that it is accessible, discoverable, usable and understandable. In his view, the challenge is to build up an audience by putting content where people will find it, which is very costly, as proper dissemination is expensive. Nancy Lutz (National Science Foundation) explained the scope and workings of the NSF and the data management plans required from all economists who apply for funding. Creating and maintaining data infrastructure and complying with the data management policy might eventually mean that there is less funding for other economic research.

Trends of Greater Participation and Growing Horizons in Economics

Chris Taggart (OpenCorporates) chaired the session, which introduced different ways of participating in and using data, and different audiences and contributors. He stressed that data is being collected in new ways and by different communities; that access to data can be an enormous privilege and can generate data gravities, with very unequal access and power to make use of data and to generate more of it; and that analysis is sometimes being done in new and unexpected ways, by unexpected contributors. Michael McDonald (George Mason University) related how the highly politicised process of drawing up district lines in the U.S. (also called gerrymandering) could be made much more transparent through an open-source redistricting process with meaningful participation, allowing for an open conversation about public policy. Michael also underlined the importance of common data formats and told a cautionary tale about a group of academics with a political agenda misusing open data to push a storyline that a candidate would win a particular state.

Hans-Peter Brunner (Asian Development Bank) shared a vision of how open data and open analysis can aid decision-making about investments in infrastructure, connectivity and policy. Simulated models of investments can demonstrate different scenarios according to investment priorities and crowd-sourced ideas. Hans-Peter asked for feedback and input on how to make data and code available. Perry Walker (new economics foundation) spoke about conversation, noting that a good conversation has to be designed, as it usually doesn’t happen by accident. Rufus Pollock (Open Knowledge Foundation) concluded with examples of citizen economics and the growth of contributions from the wider public, particularly through volunteer computing and volunteer thinking as ways of getting engaged in research.

During two sessions, the workshop participants also worked on a Statement on the Open Economics Principles, which will be revised with further input from the community and made public at the second Open Economics workshop, taking place on 11-12 June in Cambridge, MA.

First Open Economics International Workshop

- December 17, 2012 in Events, Featured, Open Access, Open Data, Open Economics, Open Research, Workshop

**You can follow all the goings-on today and tomorrow through the [live stream](http://bambuser.com/v/3232222).**

On 17-18 December, economics and law professors, data publishers, practitioners and representatives from international institutions will gather at Emmanuel College, Cambridge for the First Open Economics International Workshop. From showcasing examples of successes in collaborative economic research and open data to reviewing the legal, cultural and other barriers to information sharing, this event aims to build an understanding of the value of open data and open tools for the economics profession and the obstacles to opening up information in economics. The workshop will also explore the role of greater openness in broadening understanding of, and engagement with, economics among the wider community, including policy-makers and society.

This event is part of the Open Economics project, funded by the Alfred P. Sloan Foundation and is a key step in identifying best practice as well as legal, regulatory and technical barriers and opportunities for open economic data. A statement on the Open Economics Principles will be produced as a result of the workshop.

Introduction:
Setting the Scene – General perspectives
Rufus Pollock, Open Knowledge Foundation; Daniel L. Goroff, Alfred P. Sloan Foundation; Tim Hubbard, Wellcome Trust Sanger Institute; Victoria Stodden, Columbia University / RunMyCode.org
Videostream: Here
Session: “Open Data in Economics – Reasons, Examples, Potential”:
Examples of open data in economics so far and its potential benefits
Session host: Christian Zimmermann (Federal Reserve Bank of St. Louis / RePEc). Panelists: Paul David (Stanford University / SIEPR), Eustáquio J. Reis (Institute of Applied Economic Research – Ipea), Johannes Kiess (World Bank), Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics).
Videostream: Part 1 and Part 2
Session: “Legal, Cultural and other Barriers to Information Sharing in Economics”: Introduction and overview of challenges faced in information sharing in Economics
Session host: Lionel Bently (University of Cambridge / CIPIL). Panelists: Mireille van Eechoud (Institute for Information Law), David Newbery (University of Cambridge), John Rust (Georgetown University).
Session: “Current Data Deposit and Releases – Mandating Open Data?”: Round table discussion with stakeholders: Representatives of funders, academic publishing and academics.
Session host: Daniel L. Goroff (Alfred P. Sloan Foundation). Panelists: Albert Bravo-Biosca (NESTA), Toby Green (OECD Publishing), Nancy Lutz (National Science Foundation).
Session: Trends of Greater Participation and Growing Horizons in Economics: Opening up research and the academy to wider engagement and understanding with the general public, policy-makers and others.
Session host: Chris Taggart (OpenCorporates). Panelists: Michael P. McDonald (George Mason University), Hans-Peter Brunner (Asian Development Bank), Perry Walker (New Economics Foundation)

The workshop is designed to be a small invite-only event with a round-table format, allowing participants to share and develop ideas together. For a complete description and a detailed programme, visit the event website.

Can’t attend? Join the LIVESTREAM here


The event is being organised by the Centre for Intellectual Property and Information Law (CIPIL) at the University of Cambridge and the Open Economics Working Group of the Open Knowledge Foundation, and is funded by the Alfred P. Sloan Foundation. More information about the Working Group can be found online.

Interested in getting updates about this project and getting involved? Join the Open Economics mailing list.

The Benefits of Open Data (part II) – Impact on Economic Research

- October 21, 2012 in Open Economics

A couple of weeks ago, I wrote the first part of this three-part series on Open Data in Economics. Drawing upon examples from top research focused on how providing information and data can help increase the quality of public service provision, that article explored economic research on open data. In this second part, I would like to explore the impact of openness on economic research itself.

We live in a data-driven age

There used to be a time when data was costly: there was not much data around. Comparable GDP data, for example, has only been collected since the early-to-mid 20th century. Computing power was expensive and scarce: data and commands were stored on punch cards, and researchers had only limited hours to run their statistical analyses on the few computers available.

Today, however, statistics and econometric analysis have arrived in every office: Open Data initiatives at the World Bank and governments have made it possible to download cross-country GDP and related data with a few mouse-clicks. The availability of open-source statistical packages such as R allows virtually everyone to run quantitative analyses on their own laptops. Consequently, the number of empirical papers has increased substantially. The figure from Espinosa et al. (2012) plots the number of econometric (statistical) outputs per article in a given year: quantitative research has really taken off since the 1960s. Where researchers once used datasets with a few dozen observations, modern applied econometricians often draw upon datasets boasting millions of detailed micro-level observations.
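To give a sense of how low that barrier has become, here is a minimal sketch of pulling cross-country GDP figures from the World Bank’s public API in Python. The endpoint and indicator code (NY.GDP.MKTP.CD, GDP in current US$) follow the World Bank’s documented v2 API, but treat the exact parameters as assumptions to verify:

```python
# A minimal sketch of downloading cross-country GDP data from the
# World Bank API. Endpoint and indicator code follow the World Bank's
# v2 API documentation; verify the parameters before relying on them.
import requests

URL = "https://api.worldbank.org/v2/country/all/indicator/NY.GDP.MKTP.CD"

resp = requests.get(URL, params={"format": "json", "date": "2000:2011", "per_page": 500})
resp.raise_for_status()

# The API returns a two-element list: paging metadata, then the records.
meta, records = resp.json()
for rec in records[:5]:
    print(rec["country"]["value"], rec["date"], rec["value"])
```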

Why we need open data and access

The main economic argument in favour of open data is gains from trade. These gains come in several dimensions. First, open data helps avoid redundancy. As a researcher, you may know that the same basic procedures (such as cleaning and merging datasets) have been done thousands of times, by hundreds of different researchers. You may also have experienced the time wasted compiling a dataset someone else had already put together but was unwilling to share. Open data in these cases can save a lot of time, allowing you to build upon the work of others; by feeding your additions back to the ecosystem, you in turn ensure that others can build on your data work. Just as there is no need to re-invent the wheel, the sharing of data allows researchers to build on existing data work and devote valuable time to genuinely new research.
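As a concrete (and entirely hypothetical) illustration of the kind of routine data work that gets redone over and over, consider a standard clean-and-merge in Python; all file and column names here are invented:

```python
# A hypothetical example of routine data preparation that countless
# researchers repeat independently: cleaning two datasets and merging
# them. All file and column names are invented for illustration.
import pandas as pd

gdp = pd.read_csv("gdp_by_country.csv")  # columns: country, year, gdp
pop = pd.read_csv("population.csv")      # columns: country, year, pop

# Basic cleaning: normalise country names, drop incomplete rows.
for df in (gdp, pop):
    df["country"] = df["country"].str.strip().str.title()
    df.dropna(inplace=True)

# Merge on the shared keys and derive GDP per capita.
merged = gdp.merge(pop, on=["country", "year"])
merged["gdp_per_capita"] = merged["gdp"] / merged["pop"]
```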

Second, open data ensures the most efficient allocation of scarce resources – in this case, datasets. Again, as a researcher, you may know that academics often treat their datasets as private gold mines; indeed, entire research careers are often built on possessing a unique dataset. This hoarding often results in valuable data lying around on a forgotten hard disk, not fully used and ultimately wasted. What’s worse, the researcher owning a unique dataset may not be the most skilled at making full use of it, while someone else may possess the necessary skills but not the data. Only recently, I had the opportunity to talk to a group of renowned economists who, over the past decades, have compiled an incredibly rich dataset. During the conversation, it was mentioned that they themselves may have exploited only 10% of the data and were urgently looking for fresh PhDs and talented researchers to unlock its full potential. When data is open, there is no need to search, and data can be allocated to the most skilled researcher.

Finally, and perhaps most importantly, open data – by increasing transparency – also fosters scientific rigour: when datasets and statistical procedures are made available to everyone, a curious undergraduate student may be able to replicate and possibly refute the results of a senior researcher. Indeed, journals are increasingly asking researchers to publish their datasets along with their papers. But while this is a great step forward, most journals still keep the actual publication closed, asking for horrendous subscription fees. For example, readers of my first post may have noticed that many of the research articles linked there could not be downloaded without a subscription or university affiliation. Since dissemination, replication and falsification are key features of science, both open data and open access become essential to knowledge generation.

But there are of course challenges ahead. For example, while wider access to data and statistical tools is a good thing, the ease of running regressions with a few mouse-clicks also results in a lot of mindless data mining and nonsensical econometric outputs. Quality control hence is, and remains, important. There are – and in some cases should be – some barriers to data sharing. Researchers may have invested a substantial part of their lives constructing their datasets, in which case it is understandable that some are uncomfortable sharing their “baby” with just anyone. In addition, releasing (even anonymized) micro-level data often raises concerns about privacy protection. These issues – and existing solutions – will be discussed in the next post.

Data Party: Tracking Europe’s Failed Banks

- October 18, 2012 in Data Party, Open Economics


This fall marked the five-year anniversary of the collapse of UK-based Northern Rock in 2007. Since then, an unknown number of European banks have collapsed under the weight of plummeting housing markets, financial mismanagement and other causes. But how many European banks actually failed during the crisis?

In the United States, the Federal Deposit Insurance Corporation keeps a neat Failed bank list, which has recorded 496 bank failures in the US since 2000.
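The FDIC list is published as a machine-readable CSV, so tallying US failures takes only a few lines of Python; note that the download URL below is the file’s historical location and may since have moved:

```python
# Sketch: counting US bank failures from the FDIC's Failed Bank List.
# The CSV URL is the file's historical location and may have moved;
# check fdic.gov for the current download link.
import pandas as pd

URL = "https://www.fdic.gov/bank/individual/failed/banklist.csv"
failed = pd.read_csv(URL, encoding="latin-1")  # the file has not always been UTF-8

print(len(failed), "failed US banks recorded")
print(failed.head())
```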

Europe, however – and for that matter the rest of the world – still lacks similar or comparable data on how many banks have actually failed since the beginning of the crisis. Nobody has collected data on how many Spanish cajas have crashed or how many troubled German Landesbanken have gone under.

At the Open Economics Skype chat earlier this month, it was agreed to take the first steps towards creating a Failed Bank Tracker for Europe at an upcoming “Data Party”:

Join the Data Party

Wednesday 24th October at 5:30pm London / 6:30pm Berlin.

We hope that a diverse group of you will join in the gathering of failed bank data. During the Data Party you will have plenty of chances to discuss all questions regarding bank failures, whether general or about specific cases. Do not let your country or region be a blank spot when we draw up the map of bank failures.

At the data party we will go through some of these questions:

  • What kind of failed bank data do we wish to collect (date, amount, type of intervention, etc.)?
  • What are the possible sources (press, financial regulators or European agencies)?
  • Getting started with the data collection for the Failed Bank Tracker


You can join the Data party by adding your name and skype ID here.


Getting good data: What makes a failed bank?

For this first event, collecting data on failed European banks should provide more than enough work for us. At this moment, neither the European Commission, Eurostat nor the European Banking Authority keeps any record of bank failures like the FDIC does in the US. The best source of official European information available is DG Competition, which keeps track of approved state aid measures in member states in its State Aid database. Its accuracy is limited, however, as it contains cases ranging from state intervention in specific bank collapses to sector-wide bank guarantee schemes.

A major reason for the lack of data on bank failures is that legislation often differs dramatically between countries in terms of what actually defines a bank failure. In early 2012, I asked the UK regulator, the FSA, if it could provide a list of failed banks similar to the list from the FDIC in the US. In response, the FSA asserted that the UK had not had a single bank failure since 2007:

“I regret that we do not have a comparable list to that of the US. Looking at the US list it appears to be a list of banks that have entered administration. As far as I am aware no UK banks have entered administration in this period, though of course a number were taken over or received support during the crisis.”

The statement from the FSA demonstrates that, for instance, Northern Rock – which brought a £2bn loss on UK taxpayers – never officially failed, because it never entered administration. The example shows just how interesting and useful collecting data on bank failures would be.

Earlier this year I got a head start on the data collection, when a preliminary list of failed banks was collected from both journalists and national agencies such as the Icelandic Financial Supervisory Authority. The first 65 banks entered in the tracker, mostly from Northern Europe, are available here.

Looking forward to bringing data on failed banks together at the Data Party.

Energy and Climate Post-Hack News

- March 13, 2012 in Events, Hackathon

**Earlier this month, our [Energy and Climate Hackday](https://blog.okfn.org/2012/02/24/energy-and-climate-hackday-march-3rd/) brought together about 50 people in London and online, joining from Berlin, Washington D.C., Amsterdam, Graz and Bogota.**

With participants working in the private sector, for NGOs, universities and the public sector, we had a good mix of people with different expertise and skills. Some participants had ideas on how to communicate resource scarcity, the threat of climate change or the need to transform the existing energy structure. The challenge for developers was to visualise and present openly available data – such as the dataset of environmental indicators from the World Bank. It was a great chance to meet and work with people you don’t meet on a day-to-day basis, and to get new ideas and inspiration. The event was sponsored by AMEE, which provides aggregated and automated access to the world’s environmental and energy information, and was hosted at the offices of ThoughtWorks.

Ed Hogg from the Department of Energy and Climate Change presented the Global 2050 Pathways Calculator Challenge. The Global Calculator would show how different technology choices impact energy security and reflect the geographical opportunities and limitations of energy technologies. It could focus on sectors of the economy, on countries and regions, or combine visualisations of both, showing implications for emissions and temperatures.

 

The Carbon Budget Challenge: because of the controversy around how much each country “should” be emitting into the atmosphere, there are different criteria for determining each country’s share. According to the principle of common but differentiated responsibility in international environmental law, “parties should protect the climate system for the benefit of future and present generations of human kind on the basis of equity and in accordance with their common but differentiated responsibility and respective capabilities” (Art. 3 of the UNFCCC). Richer countries should therefore bear a higher responsibility in order to ensure equitable access to sustainable development.

But it is not just the current rate of CO2 emissions that matters. Since carbon dioxide hangs around in the atmosphere for 50 to 100 years, cumulative total emissions from historical data also need to be accounted for. According to the “polluter pays” principle, calculating each country’s historical footprint is an important way of determining its responsibility. The way emissions are calculated also leaves room for scrutiny (and creative data visualisation): according to empirical evidence, net emission transfers via international trade from developing to developed countries have increased, which poses the challenge of visualising “imported emissions”. The Historic Carbon Budget group worked on visualising historical time series of carbon dioxide emissions and comparing countries relative to the world mean.
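As a sketch of the kind of historical-footprint calculation involved, the following Python snippet computes cumulative CO2 emissions per country and each country’s share of the world total; the input file and column names are placeholders:

```python
# Sketch of a historical-footprint calculation: cumulative CO2 per
# country and each country's share of the world total. The input file
# and column names (country, year, co2) are placeholders.
import pandas as pd

emissions = pd.read_csv("co2_emissions.csv").sort_values(["country", "year"])
emissions["cumulative_co2"] = emissions.groupby("country")["co2"].cumsum()

# Shares of total historical emissions, measured in the latest year.
latest = emissions[emissions["year"] == emissions["year"].max()]
shares = latest.set_index("country")["cumulative_co2"]
print((shares / shares.sum()).sort_values(ascending=False).head(10))
```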


Meanwhile, the Future Carbon Budget group worked on visualising how the world would look under different algorithms for “allocating” emissions to countries, where the weightings of each country would vary based on:

* historical emissions, or the extent to which past high-emitting countries have “used up” their rights to emit in the future;
* population change and expected population growth, and the rights of future generations to development;
* capacity for emission abatement, based on GDP and resources to invest in research and development of green technologies.

A Contraction and Convergence model, which reduces overall emissions and brings them to an equal level per capita, was put together during the afternoon. Building upon this model, developers designed a visualisation tool where one could input different implementation years and GDP and population growth rates in order to estimate the contraction and convergence path.
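For intuition, the core of such a contraction and convergence path can be sketched as a simple linear interpolation from each country’s current per-capita emissions to a common target by the convergence year; the figures below are illustrative placeholders, not the model built at the Hackday:

```python
# Toy contraction-and-convergence path: per-capita emissions move
# linearly from today's level to a common target by the convergence
# year. All figures are illustrative placeholders, not the model
# built at the Hackday.

def convergence_path(current, target, start_year, converge_year):
    """Per-capita emissions for each year until convergence."""
    span = converge_year - start_year
    return {
        year: current + (target - current) * (year - start_year) / span
        for year in range(start_year, converge_year + 1)
    }

# Example: a high emitter (~17 tCO2/capita) converging to 2 tCO2/capita by 2050.
path = convergence_path(current=17.0, target=2.0, start_year=2012, converge_year=2050)
for year in (2012, 2030, 2050):
    print(year, round(path[year], 2), "tCO2 per capita")
```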

The Phone App to Communicate Climate Change Challenge inspired one group to show climate data and visualisations on a phone, based on where the user is located. It would be directed either at the staff of international organisations’ missions or at the general public. A phone app could be useful for communicating basic climate change facts about particular regions to the staff of international organisations like the World Bank and the IMF, saving them from wading through long and complex reports. For the general public, “global climate change” often seems too complex and distant: a phone app that communicates climate facts based on location, and which can be read wherever and whenever you have time, might reach those who would not otherwise connect with these issues.

The Deforestation and Land Use Challenge gathered the Berlin developers to create a visualisation of land use and forest area in the world. The Forestogram shows a world map with pie charts of land use (forest, agricultural land and other areas), based on the five-yearly FAO data reports published since 1990. When selecting “Usage by Kind”, the user sees a beautiful peace sign made of the pies of all the countries in the world.

Other ideas we worked on included a “Comparothon”, a web-based application which visualises data using the relative sizes of bubbles. Data could be compared either for a single indicator across time, or for a single cross-section in one period.

We would like to thank Ilias Bartolini, who was an amazing host at the offices of ThoughtWorks, our sponsors AMEE, and all the participants who shared their knowledge and skills on a Saturday. Some notes from the Hackday can be found on the Etherpad. Some prototypes are still being developed, so if you have a similar idea and would like to join in, please let us know!

For contact and feedback: velichka.dimitrova [at] okfn.org

Energy and Climate Hackday – March 3, 2012

- February 14, 2012 in Events, Hackathon

On Saturday 3rd March we’re getting together for the Energy and Climate Hackday to data-wrangle and build apps around energy and climate data. All skills and interest groups are welcome: developers, data journalists, economists, climate scientists, environmentalists and interested citizens.

* When? Saturday 3rd March, 11am GMT (12pm CET / 6am EST) to ~7pm GMT (8pm CET / 2pm EST)
* Where? London, Berlin and Online.
* London – ThoughtWorks Ltd, 9th Floor Berkshire House, 168-173 High Holborn, London, WC1V 7AA.
* Berlin – Open Knowledge Foundation Deutschland Offices – Coworking Space, St. Oberholz
Rosenthaler Straße 72a, 10119 Berlin
* Online – you can also join online from 12pm GMT (1pm CET / 7am EST) through Skype and IRC (#okfn or #okfnecon on freenode)
* Who? Anyone! All skills are needed and welcome: coding, writing, illustrating, climate modelling or having concerns about the environment.
* How? Sign up on the MeetUp page and on the Etherpad.

### Hackday Challenges:

* Creating an app which visualises different energy indicators for all countries from the World Bank database, as in Europe’s Energy.

* Reducing greenhouse gas emissions: DECC 2050 Pathways Calculator with representatives from DECC, who would like to develop an international version of the application.

* Visualisation of deforestation data with a world map, which tracks changes in forest area and land use as well as carbon dioxide emissions… also relating them to economic indicators?

* Your ideas…

### Incentives

A successful prototype will be submitted to the Apps4Climate World Bank competition. The competition calls for an application which:

* is related to climate change; either to raising awareness, measuring progress, or helping in some way to address the development challenges of climate change.
* makes use of one or more of the datasets listed in the [World Bank Data Catalog][data] or [Climate Change Knowledge Portal][portal].
* may be any kind of software application, be it for the web, a personal computer, a mobile handheld device, console, SMS, or any software platform broadly available to the public.

The competition period ends on March 16, 2012 at 5:00 PM EST.

[portal]: http://sdwebx.worldbank.org/climateportal/
[data]: http://data.worldbank.org/

### DataParty prior to the event:

You are also welcome to join the Energy and Climate DataParty on the 29th February to data-mine and mash up climate and energy data. Researchers and graduate students who have worked on environment-related topics are also invited to share their dissertation datasets on the DataHub.

If you are interested in co-organising this event and have ideas for challenges, you are welcome to join.

Lunch and drinks sponsored by AMEE and space provided by ThoughtWorks.


Open Economics Hackday

- February 1, 2012 in Events, Hackathon


Open Economics Hackday at the Barbican, London. Photo by Ilias Bartolini.


The following post is by Velichka Dimitrova, coordinator of the Open Economics Working Group.

It is great to see people coming together and doing something cool on a Saturday. The Open Economics Hackday gathered more than thirty people at the Barbican and online, crafting fancy visualisations, wrangling data and being creative together.

The day was devoted to ideas in open economics: a transparent and collaborative academic discipline which presents research outputs in a comprehensible way to the general public.

We aimed to build Yourtopia 2, an interactive application showing the development of Italy on several key social progress indicators over time. Building on previous experience with alternative non-GDP measures of human development (Yourtopia), the new project’s objective is to show how much progress can differ between individual Italian regions, as Italy is traditionally a country with stark regional inequalities.

Although the term originally referred to gatherings of computer programmers, the Open Economics Hackday was open to people with different backgrounds and various skills. Programmers were creating bits of code, data journalists were gathering and processing data, and economists were making sure the project concept addressed key problems in this field of research.

Would you like to help finish the Yourtopia 2 application? Please join the follow-up online meeting this Saturday at 2pm GMT. Confirm your participation by typing in your name on the Etherpad: http://econ.okfnpad.org/hackathon-jan-2011.

Welcome to Open Economics!

- April 27, 2011 in Announcements, Open Economics

Welcome to the Open Economics Working Group (OpenEcon WG) of the Open Knowledge Foundation!

We want economics to be built on sound, transparent foundations. In particular, it is important that the data and associated analysis (particularly as represented in runnable code) be openly available to all members of society — not just other economists.

This working group therefore exists to:

  • Act as a central point of reference and support for people interested in open data (and code) in economics.
  • Identify relevant projects and practices. Promote best practices as well as legal and technical standards for making material open (such as http://www.opendefinition.org/)
  • Act as a hub for the development and maintenance of low cost, community driven projects related to open material in economics.

Get involved!

The Open Economics WG is driven by the contributions of volunteers like you. If you would like to explore the different ways in which you can participate, please join our mailing list or contact economics [at] okfn [dot] org.