This blog has moved elsewhere!

To facilitate the sharing of content, I’ve decided to move my personal work blog to Tumblr. Thus, The Rice Cooker has now become The Electric Rice Cooker.

Prototype: Completed Buildings in Hong Kong (2005-2011)

Screenshot of completed buildings map in Hong Kong (2005-2011)

The following is a map of completed buildings in Hong Kong from 2005 to 2011, according to data from the Buildings Department as processed by us (errors may occur). It is still being worked on as we speak, so you might also find bugs.

Using a text-processing tool on Linux, we extracted the text of section 5.6 from each of the Buildings Department’s monthly PDF digests (here’s one of the 80-odd published between 2005 and 2011).
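Once each digest has been dumped to plain text, pulling out section 5.6 is mostly a matter of slicing between numbered headings. Here is a minimal sketch; the digest layout shown (headings like “5.6 …” ending at the next numbered heading) is a hypothetical stand-in for illustration, not the actual file format:

```python
import re

def extract_section(text, number="5.6"):
    """Return the body of one numbered section from a digest's plain text.

    Assumes sections start with a heading like '5.6 Title' and run until
    the next numbered heading (a hypothetical layout for illustration).
    """
    pattern = re.compile(
        r"^" + re.escape(number) + r"\b.*?$(.*?)(?=^\d+\.\d+\b|\Z)",
        re.MULTILINE | re.DOTALL,
    )
    m = pattern.search(text)
    return m.group(1).strip() if m else None

# A made-up fragment of extracted digest text:
digest = """5.5 Demolition consents
(entries...)
5.6 Buildings completed
Tower A, 123 Nathan Road
5.7 Something else
"""
```

Running `extract_section(digest)` on the fragment above returns just the lines belonging to section 5.6, ready for the cleaning stage.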

The data was cleaned with Google Refine to the best of our ability, and mapped with Google Fusion Tables according to the town planning units. We couldn’t work from addresses, which were often messy; those in the New Territories often referred only to their lot number.

There is still a lot of semi-open data (so called because it is not published in a raw text format) provided by the Hong Kong government that could use some repackaging. We’ll get back to you soon on this.

Public services shutdown

Unfortunately, we noticed that our infrastructure is not equipped to support external requests to our search tool and the other tools mentioned in previous posts. If you need data, please contact Dr. King-wa Fu directly:

Cleaning HK Gov Data with Google Refine and displaying it with Google Fusion Tables

Last week, I started working with data from Buildings Department, concerning building permits.

Despite the PDF documents being “protected” (copying is prevented when you open them with Acrobat), you can use a common Linux utility called lesspipe, a pre-processor for less that can turn many file types into readable text.

Readable does not necessarily mean structured: the lesspipe output is by no means usable as is (it looks like this after separating the sections and aggregating across different PDF files). With the fantastic Google Refine tool, however, you can try your best to parse the data, clean the different fields manually, and even perform geocoding inside the tool (with “Add column by fetching URLs”).
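“Add column by fetching URLs” simply hits one URL per row, so the geocoding step boils down to building a request URL for each address. A sketch of what that URL would look like against the Google Geocoding API of the time (the endpoint and parameters here are what we’d assume, not something Refine generates for you):

```python
from urllib.parse import urlencode

def geocode_url(address):
    """Build the per-row request URL for Google's Geocoding API.

    Appending ', Hong Kong' is an assumption to bias the geocoder
    towards the right territory; 'sensor' was a required parameter
    of the API at the time.
    """
    params = {"address": address + ", Hong Kong", "sensor": "false"}
    return ("http://maps.googleapis.com/maps/api/geocode/json?"
            + urlencode(params))

url = geocode_url("123 Nathan Road")
```

Paste an expression producing such a URL into Refine’s fetch dialog and each row gets a JSON response you can then parse into latitude/longitude columns.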

After the cleaning was done (it took a few hours last Thursday, and a few more hours today), I exported the data as TSV and sent it to Google Fusion Tables. I customized the map visualisation with the “month” field, and here is the result:

2005-2011 data for “Table 5.2 Buildings for which building authority has issued demolition consent” from Hong Kong Buildings Department’s monthly digests (alpha)

This is not even close to our final product yet, because the Google Maps JavaScript API V3 now lets you add layers from Fusion Tables data. In effect, this means you can build Web applications with different kinds of filters (in pull-down menus, etc.) that dynamically change how the data is displayed. The example above only shows the single view specified inside Fusion Tables by the owner of the table (me). You could possibly take the ID of the table (3546150) and make your own visualisation.
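To reuse the table outside the embedded view, the Fusion Tables API accepted SQL-like queries over plain HTTP. A sketch of building such a query for our table; treat the exact endpoint URL and the `month` column name as assumptions from memory of the API of that era:

```python
from urllib.parse import urlencode

TABLE_ID = 3546150  # the table ID mentioned above

def fusion_query(sql):
    """Build a query URL for the (old) Fusion Tables HTTP query API."""
    return ("https://www.google.com/fusiontables/api/query?"
            + urlencode({"sql": sql}))

# e.g. fetch one month's worth of rows for your own visualisation
url = fusion_query("SELECT * FROM %d WHERE month = '2011-06'" % TABLE_ID)
```

The same idea is what powers the dynamic filters: your Web app rewrites the WHERE clause and the map layer redraws itself.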

For now, the data hasn’t been vetted after refining (maybe the govt will provide us with raw data?), so I would recommend treating its validity with high caution. It should be largely correct, but some data points may not have been geocoded properly, if at all. For this particular dataset, corrigenda to Buildings Department monthly digests are not yet taken into account.

Here is another Google Refine + Google Fusion Tables trick on Hong Kong government data:

Map for data from “Short Term Tenancy (STT) Tender Forecast” from Hong Kong Lands Department (alpha)

This is the Short Term Tenancy (STT) Tender Forecast from the Lands Department: sites for sale on short-term tenancy, for a few years, for uses such as car parks. The colour code on this custom map is based on the area in square metres of each site for sale (from purple for 0–1000 sqm to red for 5000+).
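The colour coding amounts to a simple threshold function over the area. Only the purple (0–1000 sqm) and red (5000+) endpoints are stated above, so the intermediate buckets and colours in this sketch are assumptions for illustration:

```python
def area_colour(sqm):
    """Map a site's area in square metres to a map colour.

    Purple and red endpoints are as described; the intermediate
    buckets and colours are assumed for illustration.
    """
    if sqm < 1000:
        return "purple"
    elif sqm < 2500:
        return "blue"    # assumed intermediate bucket
    elif sqm < 5000:
        return "orange"  # assumed intermediate bucket
    return "red"
```

In Fusion Tables the same effect is achieved by configuring marker-style buckets on the area column rather than writing code.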


A year and a half ago, I did a project called Twitterball for one of my classes at PolyU on information architecture. It played on the idea of visualising the frequency of tweets over time.

I quickly remashed Twitterball into Weiboball, using data from our archive and the search engine built with WeiboScope Search. The result is Weiboball and Weibobubble.

Get all the reposts and comments of a Sina Weibo post: Introducing a new service from JMSC

We belatedly announce a new service to retrieve all the comments or reposts of any given Sina Weibo post. The service will be very useful for researchers who want to study the chatter surrounding any single post. We created a Google Form to submit posts to the system:

A Weibo ID is a 16-digit numerical identifier. One way to find a post’s ID is to use one of our tools (WeiboScope and WeiboScope Search both expose the weibo ID). If you found your post via the website, once you are on the single-post page (such as this one), go to the source code (press Ctrl-U if you are on Google Chrome) and do a Find (Ctrl-F) on “mid=”. The first number starting with a 3 that you find following “mid=” is your post ID (in the example we gave, it’s “3433594570011824”).
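The same “Find mid=, keep the first number starting with 3” recipe is easy to automate. A minimal sketch; the surrounding HTML attribute below is made up, only the `mid=` pattern comes from the post above:

```python
import re

def find_post_id(html):
    """Find the Weibo post ID in a post page's HTML source:
    the first number following 'mid=' that starts with a 3."""
    for m in re.finditer(r"mid=(\d+)", html):
        if m.group(1).startswith("3"):
            return m.group(1)
    return None

# Hypothetical fragment of a post page's source:
html = '<div action-data="mid=3433594570011824&other=1">'
```

This skips over any `mid=` values that don’t start with 3 (other identifiers embedded in the page) and returns the first real post ID.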

Once you have your bunch of IDs, you can paste them, one per line, into the form and wait. The program running on our server will start collecting the posts using the Sina Weibo Open API and send you an e-mail confirming that your job was queued. If the job is successful, you will get a second e-mail telling you that all is OK, along with a link to a zip file with your results in CSV format.
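At its core, the collection step is a pagination loop over the API. The sketch below uses a stand-in `fetch_page` function in place of the real Sina Weibo Open API call (the real endpoints, parameters, and page sizes are not shown here):

```python
def collect_all(post_id, fetch_page, per_page=50):
    """Collect every comment (or repost) of a post by walking a
    paginated API until a short page signals the end.

    `fetch_page(post_id, page, count)` stands in for the real
    Sina Weibo Open API call.
    """
    results, page = [], 1
    while True:
        batch = fetch_page(post_id, page, per_page)
        results.extend(batch)
        if len(batch) < per_page:  # last (possibly empty) page
            return results
        page += 1

# A fake API for illustration: 120 comments served 50 at a time.
def fake_fetch(post_id, page, count):
    comments = ["comment-%d" % i for i in range(120)]
    start = (page - 1) * count
    return comments[start:start + count]
```

With the fake API, `collect_all("3433594570011824", fake_fetch)` walks three pages (50 + 50 + 20) and returns all 120 comments; the real service does the same, then writes the results to CSV and zips them up.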

Try it and tell us if it works for you!

(The inspiration for this service came from my time working in bioinformatics in the mid-2000s, where tasks like finding patterns in DNA or protein sequences normally took more time than a web user can wait for.)

Sina Weibo: Zombie accounts, slain entries, and now, ghostly posts

While inspecting a potential hiccup in our Sina Weibo deleted-posts monitoring system (see the ASL / the article explaining the method), we discovered a post that was seemingly deleted, but which still occasionally appeared on the API.

It was a repost made on March 2nd by David Bandurski of CMP of a Hu Shuli post commenting on the Wang Lijun incident the same day. The repost was archived, but can no longer be found on David’s timeline. Ms. Hu’s post was still alive on Weibo when I checked (and we have it archived, just in case).

The strange thing was that this post was not marked “permission denied” and was fully available when queried with the “statuses/show” function of the Weibo API (link; requires login). On the other hand, the user_timeline function, which lists the latest 200 posts made by a given user, reported that the post no longer existed.
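That discrepancy between the two endpoints is itself the test for “ghostliness”. A sketch of the cross-check, with the API calls abstracted away into two lists of post IDs (the function name is ours, not Weibo’s):

```python
def ghostly_posts(show_visible_ids, timeline_ids):
    """Flag 'ghostly' posts: IDs that statuses/show still serves,
    but that no longer appear in the user_timeline listing."""
    timeline = set(timeline_ids)
    return [pid for pid in show_visible_ids if pid not in timeline]
```

A post returned here is alive according to one endpoint and dead according to the other, which is exactly what we observed with David’s repost.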

Even more bizarre still: when you searched for David’s posts within a date range containing March 2nd, the search page claimed 5 results, whereas only 4 were actually returned! This is shown in the following image:

For our monitoring tool, specifically the user_timeline function that casts the net for deleted posts, we have been using the first version (V1) of the API (because of rate-limiting issues). V1 seems to me like the abandoned entrance to a data store that is more and more complex in its layers of visibility (and invisibility). These layers don’t seem to be supported with 100% fidelity on V1, which will return copies of the user timeline that sometimes contain the “ghostly” post and sometimes just won’t (probably depending on which physical server you end up accessing).

Not only are there “permission denied” and “weibo does not exist” posts, as described in our previous post about the methods behind the tool, but there are now posts wavering between a state of publication and non-publication (at least from the website’s point of view, they were dead).

Anecdotally, I noticed that “permission denied” posts usually meant that their dependent posts (by people who reposted them) would also be marked as deleted with “permission denied”. In this case of the Hu Shuli post, the original was alive and well, while the repost was deleted.

China News Archive

Ever looked for an automated archive of Chinese news websites? For months, we’ve been collecting screenshots and HTML snapshots of up to 20 websites based in China or covering China. We now have a webpage for it.

The screenshots are classified by news source, with a minimalistic (if not just minimal) interface, organised by day and grouped by month. For instance, you could go to the QQ News archive, an archive for February 2011, or a particular link to today.

There’s also a version for accessing navigable HTML pages when available.

Spam spam spamalot

I’ll start filtering weibos that contain links with certain domains. It seems like most weibos containing such links lead to spam-like galleries. Bad, bad, mega-bad. The amount of spam today is just ridiculous.

How do you catch and archive deleted posts on Sina Weibo?

It’s the holy grail of any media researcher working on China: how do you quantify content removal from social media services such as Sina Weibo? Here at JMSC, we’ve been developing tools to scour and assess social media of all sorts for the purpose of researching online media in Hong Kong and mainland China. (This is the same project that generated WeiboScope, for those tuned in.)


With the extensive archive of weibos we’ve accumulated so far and the mechanisms underlying its retrieval, we were able to develop a routine that finds and marks deleted posts (method explained here).

The result is an archive of deleted posts (and the CMP’s Anti-Social List). Not only is it possible to find a large number of such posts (not exhaustive, we admit) within a day of their doom, but we can also presume, with clear-cut evidence, whether these deleted posts were removed by the users themselves or deleted by system managers (we first noticed the difference in August 2011…).

As described in a post last week, the idea behind this archive is simple and straightforward to implement, once you’ve got the infrastructure.

A previous copy of the user timeline, containing all posts

A current copy of the user timeline, with a missing post

Both copies of a user timeline (post IDs extracted from the full JSON response) are obtained during two consecutive API calls, which may be a few minutes or several hours apart. The smaller the interval between pollings, the more precisely the routine can pinpoint the removal time (and the smaller the chance of missing something).

A post is found to be deleted when it appears in the previous version but not the current one. Since we keep a copy of every post we see, we simply mark such posts, and can then view them all in a custom webpage. Easy enough, right?
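The comparison itself is a set difference over post IDs. A minimal sketch of the marking step described above, with two made-up snapshot lists standing in for the IDs extracted from the JSON responses:

```python
def find_deleted(previous_ids, current_ids):
    """Posts present in the previous copy of a user's timeline but
    missing from the current copy are flagged as candidate deletions."""
    return sorted(set(previous_ids) - set(current_ids))

# Two hypothetical snapshots of one user's timeline:
previous = ["3001", "3002", "3003", "3004"]
current = ["3001", "3003", "3004"]
```

Here `find_deleted(previous, current)` flags post 3001’s missing neighbour, 3002. In practice you would also guard against posts that merely fell out of the 200-post user_timeline window, since those disappear from the current snapshot without being deleted.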

Screenshot at 2012-02-08 12:14:35

Screenshot at 2012-02-08 12:14:26

The previous two images show exactly what, from a programmer’s point of view, Sina Weibo returns for two different kinds of deleted posts. The former, with the API response “weibo does not exist”, identifies a post that was presumably deleted by the user. The latter, which returns “permission denied”, is presumably a post deleted by the system.

We don’t know the intention behind the two types of messages, but we can guess based on what their contents generally are. The first type generally consists of spam-like posts that would be deleted on any online social network in the world. The second seems to carry more legitimate content, including some made by so-called VIP users verified by Sina (we only check a 2500-odd sample of users, so we can’t really infer how representative they are).
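In the marking routine, this interpretation boils down to a lookup on the API’s error message. A sketch of that classification; the two message strings are the ones shown in the screenshots above, while the labels and the matching-by-substring are our own presumptions:

```python
def classify_deletion(api_error):
    """Interpret Sina Weibo's error message for a deleted post
    (the presumed meanings, per our observations)."""
    if "weibo does not exist" in api_error:
        return "user-deleted"    # presumed self-removal
    if "permission denied" in api_error:
        return "system-deleted"  # presumed removal by managers
    return "unknown"
```

Every flagged post gets one of these labels in our archive, which is what lets us count the two kinds of removals separately.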

This feature is indeed powerful, because it finally puts a number on post removal on Sina Weibo. (We computer science majors strongly dislike conclusions not based on numbers and data.)

It is however currently impossible to tell with certainty what gets deleted (versus what’s not), since our user sample is strongly biased towards public commentators, and perhaps because the number of posts found is still extremely small.

What it does give is an understanding of how post removal works, how much time it usually takes for something to be removed, and whether reposts get deleted while the original posts do not (in fact, it happens). It’s a privileged peek, indeed, at what’s going on on the Chinese Internets, right here, right now.

Until we accumulate an archive substantial enough to do anything useful with, our colleagues at China Media Project have started compiling and explaining deleted posts on their Anti-Social List.