Archive for the 'Economics' Category

Pretty Mapping of Australia using R

January 5th, 2017

I’ve recently been experimenting with data visualisation in R. As part of that, I’ve put together a little bit of (probably error-ridden and redundant) code to help with mapping Australia.

First, my code is built on a foundation from Luke’s guide to building maps of Australia in R, and this guide to making pretty maps in R.

The problem is that a lot of datasets, particularly administrative ones, come with the postcode as the only geographic information. And postcodes aren’t a very useful geographic structure – there’s no defined aggregation structure, they’re inconsistent in size, and their boundaries are heavily dependent on history.

For instance, a postcode level map of Australia looks like this:

Way too messy to be useful.

The ABS has a nice set of statistical geography that will let me fix this problem by changing the aggregation level, but first I need to convert the data from postcodes to that geography.

Again, fortunately the ABS publishes concordances between postcodes and the Statistical Geography, so all I need to do is take those concordances and use them to mangle my data lightly. First, I used those concordances to make some CSV input files:

Concordance from Postcode to Statistical Area 2 level (2011)

Concordance from SA2 (2011) to SA2(2016)

Statistical Geography hierarchy to convert to SA3 and SA4
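
For reference, here’s the column layout the R code below expects from those three files. This is inferred from the code rather than from the ABS downloads themselves, so check your headers – the SA3/SA4 column names in particular are my guesses:

PCODE_SA2.csv: POSTCODE, SA2_MAINCODE_2011, Ratio
SA2_2011_2016.csv: SA2_MAINCODE_2011, SA2_MAINCODE_2016, Ratio
SA2_3_4.csv: SA2_MAINCODE_2016, plus the SA3 and SA4 codes and names (e.g. SA3_CODE_2016, SA4_CODE_2016)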

Then a little R coding. First, convert from postcode to SA2 (2011). SA level 2 is around the same level of detail as postcodes, so the conversion won’t lose much accuracy. Then convert to the 2016 boundaries and add the rest of the geography:

## Convert postcode level data to the ABS Statistical Geography hierarchy
## Quick hack job, January 2017
## Robert Ewing

library(dplyr)

## Read in the original data file, clean as needed.
## This data file is expected to have a variable 'post' for the postcode,
## and a data series called 'smsf' for the numbers.
## This code is designed to read in only one series. If you need more than
## one, you'll need to change the aggregate() calls below.
data_PCODE <- read.csv("SMSF2.csv", stringsAsFactors = FALSE)

## Change this line to reflect the name of the data series in your file.
data_PCODE$x <- as.numeric(data_PCODE$smsf)
data_PCODE$x[is.na(data_PCODE$x)] <- 0
data_PCODE$POA_CODE16 <- sprintf("%04d", data_PCODE$post)

## Read in the concordance from postcode to SA2 (2011),
## padding postcodes to four digits so the join keys match.
concordance <- read.csv("PCODE_SA2.csv", stringsAsFactors = FALSE)
concordance$POA_CODE16 <- sprintf("%04d", concordance$POSTCODE)

## Join the files, zeroing out postcodes with no data.
working_data <- concordance %>% left_join(data_PCODE, by = "POA_CODE16")
working_data$x[is.na(working_data$x)] <- 0

## Adjust for partial coverage ratios.
working_data$x_adj <- working_data$x * working_data$Ratio

## And produce the SA2 (2011) version of the dataset.
## Note: aggregate() names the value column 'x', which the joins below rely on.
data_SA2_2011 <- aggregate(working_data$x_adj,
                           list(SA2_MAINCODE_2011 = working_data$SA2_MAINCODE_2011),
                           sum)

## Now read in the concordance from SA2 (2011) to SA2 (2016) and join it.
concordance <- read.csv("SA2_2011_2016.csv", stringsAsFactors = FALSE)
working_data <- concordance %>% left_join(data_SA2_2011, by = "SA2_MAINCODE_2011")
working_data$x[is.na(working_data$x)] <- 0

## Adjust for partial coverage ratios again.
working_data$x_adj <- working_data$x * working_data$Ratio

## And aggregate to SA2 (2016).
data_SA2_2016 <- aggregate(working_data$x_adj,
                           list(SA2_MAINCODE_2016 = working_data$SA2_MAINCODE_2016),
                           sum)

## Finally, join the SA2 data with the rest of the hierarchy to allow
## on-the-fly aggregation (assumes the hierarchy file uses the same
## SA2_MAINCODE_2016 column name).
statgeo <- read.csv("SA2_3_4.csv", stringsAsFactors = FALSE)
data_SA2_2016 <- data_SA2_2016 %>% left_join(statgeo, by = "SA2_MAINCODE_2016")
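
To show what that on-the-fly aggregation looks like, here’s a minimal sketch of rolling the SA2 figures up to SA3 – assuming the hierarchy file supplies an SA3 code column called SA3_CODE_2016, which is my guess at the header name rather than a confirmed field:

## Roll the SA2 (2016) figures up to SA3.
## 'x' is the value column created by aggregate() above;
## 'SA3_CODE_2016' is an assumed column name from SA2_3_4.csv.
library(dplyr)

data_SA3 <- data_SA2_2016 %>%
  group_by(SA3_CODE_2016) %>%
  summarise(x = sum(x, na.rm = TRUE))

Swap in the SA4 column and the same pattern gives the coarser level again.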

The end result gives you a dataset that can be aggregated to a higher level on the fly. Here’s the map above, but this time using SA3 rather than postcodes:


The exchange rate according to today’s Apple announcements

October 21st, 2009

Apple announced some shiny new things today. Given that the Australian dollar is around 92 cents, we’d hope for a good exchange rate. What has Apple actually done?

  • iMac 21.5-inch: US $1,199, AUS $1,599
  • iMac 27-inch: US $1,699, AUS $2,199
  • Mac Mini Base: US $549, AUS $849
  • MacBook: US $999, AUS $1,299
  • Magic Mouse: US $69, AUS $99
  • Apple Remote: US $19, AUS $25

It’s important to remember that the US prices are before sales tax, so I’ve taken the GST (10%) off the Australian prices to work out the exchange rates.

When we do that, the exchange rate ranges from 71 cents (on the Mac Mini, which is sad because that’s the one I want to buy) to 85 cents on the MacBook. The average is pretty much 81 cents. Clearly there’s some rounding going on to hit nice price points.
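
If you want to check the arithmetic, here’s a quick R sketch (prices from the list above; the implied rate is the US price divided by the GST-exclusive Australian price):

## Implied exchange rates from the October 2009 price list above.
us  <- c(iMac21 = 1199, iMac27 = 1699, MacMini = 549,
         MacBook = 999, MagicMouse = 69, Remote = 19)
aus <- c(iMac21 = 1599, iMac27 = 2199, MacMini = 849,
         MacBook = 1299, MagicMouse = 99, Remote = 25)

## Strip the 10% GST off the Australian prices, then divide.
implied <- us / (aus / 1.1)
round(implied, 2)
mean(implied)  ## about 0.81; the Mac Mini is the worst at about 0.71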

Why is it so much lower? Because Apple (like most other companies) isn’t silly. They know that the exchange rate fluctuates, so they don’t set prices based on what it is this week. Rather, they look at longer-run averages. And if you look at the average exchange rate over the last six months (excluding October, as they would have set prices a few weeks ago), it’s also around 81 cents.


Hands up if you can see the problem

February 27th, 2008

Net neutrality is one of the biggest hot-button issues among the nerd illuminati of the Internet right now. The simple question is whether all internet bits are equal, or whether ISPs should be allowed to privilege some bits (from their customers, or from people who pay them) over others.

There are some side issues here, but a big part of it is peer-to-peer. Which brings me to this story from today that online video distributors can save a lot of money by using peer-to-peer protocols.

In the example given, Democracy Now saves $1,000 (of a $1,200 bill) by using BitTorrent. My question is – who ends up paying that $1,000? If we assume (and it’s not a great assumption) that everything is competitive, then that $1,200 represents the cost of pushing that many bits to end users. If it goes down, then $1,000 worth of bits must now be being pushed by someone else – in this case, over the upstream bandwidth of the users.

So who pays?

At first, probably the ISPs of the end users. Their upstream bandwidth gets used up, costing them money.

They’ll pretty quickly pass that on to the end users. Which means they’ll increase prices for everyone.

So what’s Democracy Now really doing here? They’re pushing the costs of distribution from themselves onto end users. Which, due to the way pricing is set up, will be borne equally by everyone, regardless of how interested they are. In fact, people who have no interest at all in the video probably end up paying for this too.

I’m not arguing against net neutrality – there are other reasons why it’s a good idea. This is probably more an example of how the pricing for internet access is set up wrong – flat-rate charges create strange incentives across the Internet, not just for the end users.

But that $1,000 saving? That doesn’t exist. You’re just making other people pay it.


Why isn’t the computer game business more like films?

February 26th, 2008

Just last week Electronic Arts offered US $2 billion to purchase another publisher, Take-Two.

This is part of a continuing trend in the computer video game business, with the really big publishers consolidating. In some sense, this is a lot like the film business – the big studios make up a very substantial proportion of the total turnover of the industry.

But the big difference is at the next level down. In the film industry almost everyone is a contract player – directors, writers, and actors all move from studio to studio, settling at one studio only for the time it takes to make one or two projects. But in the games industry most of the ‘talent’ is permanently employed, staying with the same company for many years.

This is odd – in many respects the requirements are the same. Video games are expensive undertakings these days – $10-20 million to produce, millions more to market and distribute. This is still well short of the cost of a major movie, but the gap is closing.

In fact, the structure of the video game industry looks a lot like the movie industry of the 1930s. The studio system lasted until the courts broke up the industry’s vertical integration in the late 1940s (with Howard Hughes at RKO the first studio owner to comply). At present, much like the 1930s film industry, the video game industry has a few stars (which, in this case, are intellectual properties like Halo or Quake), and most people in the industry are relatively anonymous.

So what will change? My guess is that it will be the rise of the talent.

At present there isn’t a single famous video game writer, and only a couple of famous ‘directors’ (the analogy isn’t perfect, of course). In fact, if you look at the industry’s Game Developers Choice Awards, the nominations for writing, art direction and so on mention only the game, not the actual writer!

That’s slowly changing. The enthusiast press has been paying attention to the project heads for a while, and is starting to pay a lot more attention to the writers as well.

Once people know the names, the names can ask for more money. A few people can do this now, but as the media pays more attention, more people will become famous (at least in the video game world), and they’ll start to move from project to project in search of better money.

And once that happens, the game publishers are going to look a lot more like Hollywood – they’ll own the IP for some of the series, and they’ll bankroll the whole thing, but the people making the games won’t usually be tied to any one publisher, and they’ll move around a lot more than is the case now.

And this will be a really good thing for games, because the best talent will be recognised appropriately, and the best projects will attract the best people.


Incentives

February 16th, 2008

Valleywag, before per-view incentives were provided to staff:

[Valleywag], after (links are not safe for work…):

Any questions?


Rational economics

January 24th, 2008

I was listening this morning to a recent episode of the great podcast Skepticality, and I was very struck by a question host Swoopy asked of interviewee Michael Shermer, talking about his new book on economics and psychology:

What do we do […] to make better rational choices and fewer emotional ones.

Dr Shermer gave a good answer about being aware of the tricks marketers play and the findings of economic psychology.

I have a slightly different answer: why should we?

There’s a lot of talk around (especially in the Australian media) about how experimental economics is showing that “people aren’t rational”, about the limits to rationality. Some of this is very good and interesting. Part of the problem is the word ‘rational’. When most economists use it they’re talking about a very narrow technical definition that has little to do with the other dictionary meanings. That confuses a lot of people.

But there are also a lot of value judgements tied up in most people’s view of rational. For instance, I want to lose weight, but I also want to eat that chocolate bar. Is it ‘irrational’ if I do eat the chocolate bar? Of course not – it just reflects my preferences and discount rates at the time.

So, if our emotions would lead us to choose one thing, but the ‘rational’ choice is something else, is there any reason to think it’s always better to choose ‘rationally’? Sometimes, sure. If it makes sense to go to another store for $50 off a $100 iPod, it also makes sense to do it for $50 off a $10,000 TV. But the chocolate bar is still a perfectly reasonable choice to make, even if it’s not what you’d choose in some other circumstance. Preferences don’t have to stand still all the time for people to be rational.