
Open model of an oil contract

- October 22, 2013 in External Projects, Featured, Open Data, Open Economics

Please come and kick the tires of our open model of an oil contract!

In the next month or so, OpenOil and its partners will publish what we believe will be the first financial model of an oil contract released under a Creative Commons license. We would like to take this opportunity to invite the Open Economics community to come and kick the tires of the model when it is ready, and help us improve it.

We need you because we expect a fair degree of heat from those with a financial or reputational stake in continued secrecy around these industries. We expect the brunt of the attacks to claim that we are wrong. And of course we will be wrong in some way; it's inevitable. So we would like our defence to be not "no, we're never wrong", but "yes, sometimes we are wrong, but transparently so and for the right reasons – and look, here are a bunch of friends who have already pointed out these errors, which have been corrected. You have specific critiques? Come and give them. But the price of criticism is improvement – the open source way!" We figure Open Economics is the perfect network in which to seek that constructive criticism.


Ultimately, we want to build an open source community that will develop a systematic understanding of the economics of the oil and gas industry, independent of investor or government stakes, since the public policy impact of these industries and the associated financial flows is too vital to be left to industry specialists. There are perhaps 50 countries in the world where such models could transform public understanding of industries that dominate the political economy.

The model itself is still being fine-tuned, but I'd like to take this chance to set out a few heuristics that have emerged in the process of building it.

Public interest modelling. The model is being built by professionals with industry experience, but its primary purpose is to inform public policy, not to aid investment decisions or to serve as negotiation support for either governments or companies. This has shaped our approach to key issues such as the management of complexity and what counts as an acceptable margin of error.

Management of complexity. Although there are several dozen variables one could model, and which typically appear in the models produced for companies, we deliberately exclude a long tail of fiscal terms, such as ground rent and signature bonuses, on the basis that the reduction in the margin of error they would buy is smaller than the cost of the added complexity for the end user. We also exclude many of the fine-tuning provisions of the taxation system. We list these terms in a separate sheet so that those who wish can extend the model with them. It would be great, for example, to get tax-geek help refining some of these issues.

A hierarchy of margins of error. Extractives projects can typically last 25 years. The biggest single source of error is beyond anyone's power to resolve: the future price. All other uncertainties or estimates pale in comparison with its impact on returns to all stakeholders. Second come the capex and opex going into a project. The international oil company may be the only real source of these data, and may or may not share them in disaggregated form with the government – everyone else is in the dark. For public interest purposes, the margin of error created by all other fiscal terms and input assumptions combined is less significant, and manageable.

Moving away from the zero-sum paradigm. Because modelling has traditionally been associated with the negotiation process, and perhaps because of the wider context surrounding extractive industries, a zero-sum paradigm often dominates public thinking about the terms of these contracts. But the model shows graphically two distinct ways in which that paradigm does not apply. First, in agreements with sufficient progressivity, rising commodity prices can mean a simultaneous rise in both the government take and the company's Internal Rate of Return. Second, a major issue for governments and societies that depend on oil production is volatility – running the model with minimal versus maximal assumptions across all of the inputs will likely produce radically different results. One of a country's biggest challenges, then, is focusing enough attention on regulating itself: its politicians' appetite for spending, its public's appetite for patronage. We know this, of course, from the real world. Iraq received $37 billion in 2007, then $62 billion in 2008, then $43 billion or so in 2009. But it is the old journalistic difference between telling and showing. A model can show this in your country, under your conditions.
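To make the first point concrete, here is a minimal sketch of the progressivity effect. Everything in it – the production profile, the royalty, the sliding-scale profit split – is invented purely for illustration and is not taken from the OpenOil model itself:

```python
# Toy illustration only: these fiscal terms and project numbers are invented,
# not taken from the OpenOil model.

def npv(rate, cashflows):
    """Net present value of annual cash flows, with year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=1e-6, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def toy_contract(price, years=10, output=100, capex=10_000, opex_per_unit=10,
                 royalty_rate=0.10):
    """Return (government take, company IRR) for a given oil price."""
    revenue = price * output
    royalty = royalty_rate * revenue
    opex = opex_per_unit * output
    profit_base = revenue - royalty - opex
    # Progressive element: the state's share of profit rises with the price.
    gov_share = min(0.80, 0.30 + 0.004 * max(price - 40, 0))
    gov_annual = royalty + gov_share * profit_base
    company_annual = profit_base * (1 - gov_share)
    # Cash generated by the project before any government take.
    pre_take_cash = (revenue - opex) * years - capex
    government_take = gov_annual * years / pre_take_cash
    company_cashflows = [-capex] + [company_annual] * years
    return government_take, irr(company_cashflows)

for price in (50, 100):
    take, rate = toy_contract(price)
    print(f"price ${price}: government take {take:.0%}, company IRR {rate:.0%}")
```

With these made-up terms, doubling the price raises both the government's share of project cash and the investor's IRR – the sense in which the contract is not zero-sum – and the same two runs also show how completely the price assumption dwarfs every other input, the point made in the previous paragraph.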

The value of contract transparency. Last only because it is self-evident: the primary extractives contracts between states and companies need to enter the public domain. Only about seven jurisdictions around the world publish all their contracts so far, but the practice is gaining traction as a norm in the governance community. The harmful side-effects of the way extractive industries are managed now are almost all due to the ill-understood nature of rent. Even corruption, the hottest issue politically, may often simply be a secondary effect of the rent-based nature of the core activities. Publishing all contracts is the single biggest measure that would get us closer to addressing the root causes of the Resource Curse.

See http://openoil.net/ for more details.

Fundamental Stock Valuation on an Open Platform

- September 3, 2013 in External Projects, Featured, Open Data

Investors have traditionally relied on Wall Street analysts for projections of companies' intrinsic values. Wall Street analysts typically arrive at their valuations using Discounted Cash Flow (DCF) analysis, but they do not disclose the proprietary models behind their buy, sell or hold recommendations. Thinknum has a solution which allows users to build their own models.

A cash flow model is a tool for translating projections of a company's future operating performance, such as revenue growth and cost of goods, into an intrinsic value for the company. Without seeing the assumptions underlying a model, a leap of faith is required to use its outputs. With Thinknum, users can view and change any formula or assumption that drives the valuation. The interactive nature of the application lets users conduct 'what-if' analysis to test how sensitive a company's valuation is to changes in a specific performance measure.

To get started, all that is needed is a stock ticker. After entering the ticker, Thinknum displays a model built using the mean of analysts' revenue growth projections. We load the historical numbers for the company's balance sheet, income statement and statement of cash flows from corporate filings. We then use the growth assumptions to project how the company's financial performance will evolve over time and how much value will ultimately accrue to shareholders. Users can modify the model or build one from scratch, and can also download models as Excel spreadsheets.

The Google DCF 3 Statement Model pictured above is an example of a model I recently built to value Google's stock. If you disagree with my assumptions about Google's revenue growth, you can simply change those assumptions and compute a new value. DCF models can be used to make rational investment decisions by comparing the model's intrinsic value to the current market price.
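For readers who have never built one, the core mechanics fit in a few lines. The sketch below is a generic single-stage DCF with a Gordon-growth terminal value, not Thinknum's actual model, and every input number is a placeholder rather than a real Google assumption:

```python
# A minimal, generic DCF sketch. All inputs below are placeholders,
# NOT Thinknum's model or actual assumptions about Google.

def dcf_value_per_share(revenue, fcf_margin, growth, years,
                        discount_rate, terminal_growth, shares_out,
                        net_cash=0.0):
    """Discount projected free cash flows plus a terminal value to today."""
    pv = 0.0
    for t in range(1, years + 1):
        revenue *= (1 + growth)                  # project revenue forward
        fcf = revenue * fcf_margin               # free cash flow that year
        pv += fcf / (1 + discount_rate) ** t     # discount back to today
    # Gordon-growth terminal value at the end of the explicit forecast.
    terminal_fcf = revenue * fcf_margin * (1 + terminal_growth)
    terminal_value = terminal_fcf / (discount_rate - terminal_growth)
    pv += terminal_value / (1 + discount_rate) ** years
    return (pv + net_cash) / shares_out

# Hypothetical inputs -- change any of them and the intrinsic value moves.
value = dcf_value_per_share(revenue=60e9, fcf_margin=0.25, growth=0.12,
                            years=5, discount_rate=0.10,
                            terminal_growth=0.03, shares_out=335e6,
                            net_cash=45e9)
print(f"Estimated intrinsic value per share: ${value:,.0f}")
# Compare this figure with the market price to judge whether the stock
# looks cheap or expensive under YOUR assumptions, as the post suggests.
```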

One important caveat: any model is only as good as the assumptions underlying it. We provide data from over 2,000 sources in an attempt to place proper context around companies and help analysts make the best assumptions based on all the information available. Thinknum users can plot any number in the cash flow models over time. Visualizing numbers over time and comparing metrics across an industry help users gain insight into a company's historical performance and how that performance might vary going forward. For example, simply type total_revenue(goog) into the expression window to pull up Google's total historical revenue. You can then click on the bar graphs to pull up the corporate filings used in the charts.

We are excited about the role the web can play in helping us make better decisions by rationally analyzing available data.

The AEA Registry for Randomized Controlled Trials

- July 4, 2013 in External Projects, Featured, Open Tools, Trials Registration

The American Economic Association (AEA) has recently launched a registry for randomized controlled trials in economics (https://www.socialscienceregistry.org). The registry aims to address the growing number of requests for registration by funders and peer reviewers, make access to results easier and more transparent, and help solve the problem of publication bias by providing a single place where all trials are registered in advance of their start.

Screenshot of www.socialscienceregistry.org

In order to encourage registration, the process was designed to be very light. There are only 18 required fields (such as the name and a small subset of IRB requirements), and the entire process should take less than 20 minutes. There is also the opportunity to add much more, including power calculations and an optional pre-analysis plan. To protect confidential and other sensitive design information, most of the information can remain hidden while the project is ongoing.

Please contact support [at] socialscienceregistry.org with any questions, comments or support issues.

Quandl: find and use numerical data on the internet

- July 2, 2013 in External Projects, Featured, Open Data

Quandl.com is a platform for numerical data that currently offers 6 million free and open time series datasets. Conceptually, Quandl aspires to do for quantitative data what Wikipedia did for qualitative information: create one location where quantitative data is easily available, immediately usable and completely open.

A screenshot from the Quandl data page

Open Economics and Quandl thus share a number of core values and objectives. In fact, at Quandl we are working to build part of the “transparent foundation” that is central to the Open Economics mission.

Quandl was invented to alleviate a problem that almost every econometrician knows well: finding, validating, formatting and cleaning data is a tedious and time-consuming prerequisite to econometric analysis. We're gradually reducing the magnitude of this problem by bringing all open time series datasets to one place, one source at a time.

To do this, we've built a sort of "universal data parser" which has so far parsed about 6.4 million datasets. We've asked nothing of any data publisher: as long as they publish data in some form (Excel, text file, blog post, XML, API, etc.), the "Q-bot" can slurp it up.

The result is www.quandl.com, a sort of "search engine" for time series data. The idea behind Quandl is that you can find data fast – and, more importantly, once you find it, it is ready to use, because Quandl's bot returns data in a completely standard format, which we can then translate into any format a user wants.

Quandl is rich in financial, economic and sociological time series data. The data is easy to find and transparent to source. Datasets can easily be merged with one another, visualized and shared. It is all open and all free. There's much more about our vision on the about page.
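By way of illustration, a few lines of Python are enough to pull a series and start working with it. This sketch assumes the Quandl Python client (`pip install quandl`) and uses the FRED/GDP dataset code as one example; depending on usage, an API key may be required:

```python
# Rough sketch of pulling a Quandl time series into Python.
# Assumes the Quandl Python client; an API key may be needed for heavier use
# (quandl.ApiConfig.api_key = "YOUR_KEY").

import quandl

gdp = quandl.get("FRED/GDP")          # returns a pandas DataFrame indexed by date
print(gdp.tail())                     # most recent observations
print(gdp.pct_change().describe())    # quick look at quarter-on-quarter growth

# The same series can then be exported to any of the formats mentioned above,
# e.g. gdp.to_csv("gdp.csv") or gdp.to_json("gdp.json") via pandas.
```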

Every day Quandl's coverage increases thanks to contributions made by Quandl users. We aspire to reach a point where publishers instinctively choose to put their data on Quandl. This has already started to happen because Quandl offers a solid, highly usable and completely open platform for time series data. We will work to perpetuate this trend and thus do our small part to advance the open data movement.

First Opinion series on Transparency in Social Science Research

- June 7, 2013 in Berkeley Initiative for Transparency in the Social Sciences (BITSS), External Projects, Featured, Open Data, Open Economics, Open Research

The Berkeley Initiative for Transparency in the Social Sciences (BITSS) is a new effort to promote transparency in empirical social science research. The program is fostering an active network of social science researchers and institutions committed to strengthening scientific integrity in economics, political science, behavioral science, and related disciplines.

Central to the BITSS effort is the identification of useful strategies and tools for maintaining research transparency, including the use of study registries, pre-analysis plans, data sharing, and replication. With its institutional hub at UC Berkeley, the network facilitates discussion and critique of existing strategies, testing of new methods, and broad dissemination of findings through interdisciplinary convenings, special conference sessions, and online public engagement.

The first opinion series on transparency in social science research (see: http://cegablog.org/transparency-series/) was published on the CEGA Development Blog in March 2013. The series built on a seminal research meeting held at the University of California, Berkeley on December 7, 2012, which brought together a select interdisciplinary group of scholars – from biostatistics, economics, political science and psychology – with a shared interest in promoting transparency in empirical social science research.

Disclosure and ‘cook booking’

- March 25, 2013 in Contribution Economy, Economic Publishing, Featured, Open Data, Open Economics

This blog post is cross-posted from the Contribution Economy Blog.


Many journals now have open data policies, but they are only sparingly enforced, so many scientists do not submit data. The question is: what drives them not to submit? Is it laziness? Is it a desire to keep the data to themselves? Or is it something more sinister? After all, the open data rules were introduced, in part, to allow for replication experiments to ensure that the reported results were accurate.

Robert Trivers reports on an interesting study by Wicherts, Bakker, and Molenaar that correlates disclosure of data with the statistical strength of results in psychology journals.

Here is where they got a dramatic result. They limited their research to two of the four journals whose scientists were slightly more likely to share data and most of whose studies were similar in having an experimental design. This gave them 49 papers. Again, the majority failed to share any data, instead behaving as a parody of academics. Of those asked, 27 percent failed to respond to the request (or two follow-up reminders)—first, and best, line of self-defense, complete silence—25 percent promised to share data but had not done so after six years and 6 percent claimed the data were lost or there was no time to write a codebook. In short, 67 percent of (alleged) scientists avoided the first requirement of science—everything explicit and available for inspection by others.

Was there any bias in all this non-compliance? Of course there was. People whose results were closer to the fatal cut-off point of p=0.05 were less likely to share their data. Hand in hand, they were more likely to commit elementary statistical errors in their own favor. For example, for all seven papers where the correctly computed statistics rendered the findings non-significant (10 errors in all) none of the authors shared the data. This is consistent with earlier data showing that it took considerably longer for authors to respond to queries when the inconsistency in their reported results affected the significance of the results (with responses coming without data sharing!). Of a total of 1148 statistical tests in the 49 papers, 4 percent were incorrect based only on the scientists’ summary statistics and a full 96 percent of these mistakes were in the scientists’ favor. Authors would say that their results deserved a ‘one-tailed test’ (easier to achieve) but they had already set up a one-tailed test, so as they halved it, they created a ‘one-half tailed test’. Or they ran a one-tailed test without mentioning this even though a two-tailed test was the appropriate one. And so on. Separate work shows that only one-third of psychologists claim to have archived their data—the rest make reanalysis impossible almost at the outset! (I have 44 years of ‘archived’ lizard data—be my guest.) It is likely that similar practices are entwined with the widespread reluctance to share data in other “sciences” from sociology to medicine. Of course this statistical malfeasance is presumably only the tip of the iceberg, since in the undisclosed data and analysis one expects even more errors.

It's only a correlation, but it is troubling. The issue is that authors present results selectively, and sadly this is not picked up in the peer review process. And of course, even with open data, it still takes effort to replicate a study and then publish alternative results and conclusions.
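To see why the "one-tailed test" move described in the quotation matters, here is a small illustrative calculation (the test statistic is invented purely for the example):

```python
# Illustration of how the choice of tail can drag a result across p = 0.05.
# The z-value below is made up purely for the example.

from scipy import stats

z = 1.8                                        # hypothetical test statistic
p_two_tailed = 2 * (1 - stats.norm.cdf(z))     # ~0.072, not "significant"
p_one_tailed = 1 - stats.norm.cdf(z)           # ~0.036, "significant"
print(f"two-tailed p = {p_two_tailed:.3f}, one-tailed p = {p_one_tailed:.3f}")
# Halving an already one-tailed p-value again -- the "one-half tailed test"
# mocked in the quote -- has no statistical justification at all.
```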

Data Sharing: Poor Status Quo in Economics

- March 5, 2013 in EDaWaX, External Projects, Featured

This article is cross-posted from the blog of the European Data Watch Extended project.

In the context of our research project EDaWaX, a new research paper has been published by Patrick Andreoli-Versbach (International Max Planck Research School for Competition and Innovation (IMPRS-CI), LMU Munich, Munich Center for Innovation and Entrepreneurship Research (MCIER)) and Frank Mueller-Langer (Max Planck Institute for Intellectual Property and Competition Law, IMPRS-CI, MCIER).

The paper analyzes the data sharing behavior of 488 randomly chosen empirical economists. More specifically, the researchers under study were chosen uniformly across the top 100 economics departments and the top 50 business schools and randomly within the respective institution. Economics departments were chosen using the Shanghai Ranking 2011 in Economics and Business and business schools were chosen using the Financial Times Global MBA Ranking 2011.


Looking again at “Big Deal” scholarly journal packages

- February 18, 2013 in Contribution Economy, Economic Publishing, Featured, Open Access, Open Economics

This blog post is cross-posted from the Contribution Economy Blog.

One of the things pointed to in the debate over market power and scholarly journals is the rise of "Big Deal" packages. Basically, these have arisen as publishers bundle journals together for a single price. Indeed, as publishers have merged and acquired more titles, these bundled packages have become more compelling, with individual journal subscription prices to libraries rising at a higher rate. This means that libraries with limited budgets are driven to give a greater share of their journal budgets to larger publishers, squeezing out smaller ones. The claim is that this is reducing choice.

While it is reducing choice amongst publishers, Andrew Odlyzko, in a recent paper, points out that "Big Deals" have also increased the number of journal titles available, not just in large libraries but across the board.

[Figure: Serials]

The reason is basically the same reason that is behind the drive towards open access: in electronic form, the marginal cost of an additional journal is zero, so it makes sense to provide more journal titles to each library. Moreover, for smaller libraries, the average cost of a journal title has fallen at a faster rate than it has for larger libraries. In other words, behind the spectre of increased publisher profits and market power is an increase in journal availability. Put simply, more researchers have easier access to journals than before. This is one case where – if we just consider university libraries – price discrimination (using Varian's rule) looks to be in the welfare-improving range.
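For readers unfamiliar with the reference, "Varian's rule" here is a gloss on Varian's (1985) welfare bounds for third-degree price discrimination – my paraphrase, not something spelled out in the original post. If $p^0$ is the initial uniform price, $p^1_i$ the discriminatory prices and $\Delta q_i$ the change in quantity sold to buyer group $i$, then

$$\sum_i p^1_i \, \Delta q_i \;\le\; \Delta W \;\le\; p^0 \sum_i \Delta q_i ,$$

so discrimination can raise welfare only if total output rises. The observation that "Big Deals" have put more journal titles into more libraries is what places them on the welfare-improving side of that condition.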

But there are, of course, wrinkles to all of this. It says nothing of access beyond universities, which is still an issue both economically and, increasingly, morally. It also says nothing of the distribution of rents in the industry. Publisher profits have increased dramatically, and that money has to come from somewhere.

Odlyzko raises a new issue in that regard: publisher profits are a symptom that libraries are being squeezed. Of course, we know that the share of library budgets devoted to journal acquisition has risen. At the same time, library budgets have fallen, although not as quickly as Odlyzko expected a decade ago. The reason is that libraries command attention at universities, and changes to them are a signal of how quickly change can occur within universities – which, as it turns out, is not very quickly. Libraries are centrally located, are viewed nostalgically by alumni donors, and hitting their budgets can easily be read as a move against scholarship.

But what publishers are providing now, in terms of electronic access and search, is as much a transfer of functions as of money from libraries to themselves. Put simply, publishers are now doing what librarians used to do: they have provided tools that make it easier for people to find information. It is another case of machines being substituted for labor.

The competition between libraries and publishers has implications for how we view alternative journal business models. Take, for instance, the notion that journals could be funded by author fees, and made open access, instead of being funded by user fees. If we did this, it would just shift the locus of the competitive fight between libraries and publishers to involve academics. Academics can legitimately argue that these new publication fees should come from the institution – and where will the institution find the money? In the library budgets relieved as more journals go open access. So either way, the money for journal publishing will end up coming from libraries.

This is not to say that there is no scope for reducing the costs of journal access and storage. Those costs are surely bloated now, as they include the publishers' market power premium. The point is that libraries spent as much time resisting changes to journal business models as publishers did, and that seems to have been a political error on their part.

This is all familiar stuff to economists. The flow of money is less important than the structure of activities. When it comes down to it, we know one thing: we can provide a journal system with labor from academics (as writers, referees and editors) and publisher activities when there is enough willingness to pay for all of it. That means we can keep the overall payment the same and still, because journals are a non-rival good, have open access. In other words, there is no market impediment to open access; it would be a pure Pareto improvement. The question now is how to do the "Really Big Deal" to get there.

Joshua Gans Joining the Advisory Panel of the Working Group

- February 14, 2013 in Advisory Panel, Contribution Economy, Featured, Open Access, Open Data, Open Economics

We are happy to welcome Joshua Gans to the Advisory Panel of the Open Economics Working Group.

Joshua Gans

Joshua Gans is a Professor of Strategic Management and holder of the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management, University of Toronto (with a cross appointment in the Department of Economics). Prior to 2011, he was the foundation Professor of Management (Information Economics) at the Melbourne Business School, University of Melbourne and prior to that he was at the School of Economics, University of New South Wales. In 2011, Joshua was a visiting researcher at Microsoft Research (New England). Joshua holds a Ph.D. from Stanford University and an honors degree in economics from the University of Queensland. In 2012, Joshua was appointed as a Research Associate of the NBER in the Productivity, Innovation and Entrepreneurship Program.

At Rotman, he teaches Network and Digital Market Strategy to MBA and Commerce students. Most recently, he has written an eBook, Information Wants to be Shared (Harvard Business Review Press). While Joshua's research interests are varied, he has developed specialities in the nature of technological competition and innovation, economic growth, publishing economics, industrial organisation and regulatory economics. In 2007, Joshua was awarded the Economic Society of Australia's Young Economist Award. In 2008, he was elected as a Fellow of the Academy of Social Sciences, Australia. Details of his research activities can be found here. In 2011, Joshua (along with Fiona Murray of MIT) received a grant of almost $1 million from the Sloan Foundation to explore the Economics of Knowledge Contribution and Distribution.

Dutch PhD-workshop on research design, open access and open data

- February 1, 2013 in Economic Publishing, EDaWaX, External Projects, Featured, Open Access, Open Data, Open Economics

This blog post is written by Esther Hoorn, Copyright Librarian, University of Groningen, the Netherlands.

If Roald Dahl were still alive, he would certainly be tempted to write a book about the Dutch social psychologist Diederik Stapel. Not only did he make up the research data to support his conclusions, but he also ate all the M&M's he had bought with public money for interviews with fictitious pupils at fictitious high schools. In the Netherlands, the research fraud by Stapel was a catalyst for bringing attention to the issues of research integrity and availability of research data. A new generation of researchers needs to be aware of the data-sharing policy of the Dutch research funder NWO, the EU policy, and the services of DANS, Data Archiving and Networked Services. In the near future, a data management plan will be required in every research proposal.

Verifiability

For some time now, the library at the University of Groningen has been organizing workshops for PhD candidates to raise awareness of the shift towards Open Access. Open Access and copyright are the main themes. The request to also address the verifiability of research data came from SOM, the Research Institute of the Faculty of Economics and Business. The workshop is given as part of the Research Design course of the PhD program. The blog post Research data management in economic journals proved very useful for getting an overview of the related issues in this field.

Open Access

As we often see, Open Access was a new issue to most of the students. Because the library buys licenses, the students don't perceive a problem with access to research journals. Moreover, they are not aware of the large sums that universities currently pay to finance access exclusively for their own staff and students. Once they understand the issue, there is strong interest. Some see a parallel with innovative distribution models for music. The PhD candidates come from all over the world, and Open Access is increasingly being addressed in every country. One PhD candidate from Indonesia mentioned that the Indonesian government requires his dissertation to be available through the national Open Access repository. Chinese students were surprised by the availability of information on Open Access in China.

Assignment

The students prepared an assignment with some questions on Open Access and sharing research data. The first question is still about the impact factor of the journals in which they intend to publish. The questions brought the discussion to article-level metrics and alternative ways to organize the peer review of Open Access journals.

Will availability of research data stimulate open access?

Example of the Open Access journal Economics

The blog post Research data management in economic journals presents the results of the German project EDaWaX, European Data Watch Extended. An important result of the survey points to the role of association and university presses; in particular, it appears that many journals followed the data availability policy of the American Economic Association. In the project's words:

“We found out that mainly university or association presses have high to very high percentages of journals owning data availability policies while the major scientific publishers stayed below 20%.

Out of the 29 journals with data availability policies, 10 used initially the data availability policy implemented by the American Economic Review (AER). These journals either used exactly the same policy or a slightly modified version.”

For students it is reassuring to see how associations take up their role in addressing this issue. An example of an Open Access journal that adopted the AER policy is Economics. And yes, this journal does have an impact factor in the Social Science Citation Index, and it also offers the possibility to archive datasets in the Dataverse Network.

Re-use of research data for peer review

One of the students suggested that the public availability of research data (instead of merely the research findings) may lead to innovative forms of review, which may in turn facilitate a further shift towards Open Access. With access to the underlying research data and the methodologies used, scientists may be in a better position to evaluate the quality of the research conducted by their peers. The typical quality label given by top and very good journals may then become less relevant over time.

It was also discussed that journals might stop publishing a fixed number of papers in a volume released, say, four times a year, and instead publish papers as they qualify throughout the year. Another point raised was that a substantial change in the existing publication mechanics will likely require either top journals or top business schools to lead the way, while associations of leading scientists in a given field may also play an important role in such a conversion.