Sina Weibo deleted posts archive

Since Sina Weibo has a pretty good API, and since we already download lots of data every day, it makes good sense to keep an archive of deleted posts.

The strategy is very straightforward and only incurs a negligible number of extra hits against the API:
– Call the statuses/user_timeline function for each user in your list (we have 2,500 in a sub-list).
– Extract the IDs of all 200 posts in the response and save them as a text file, one ID per line. They are already ordered chronologically.
– You should have a previous list of IDs; use diff to compare the two files.
– Loop through the diff output and mark all the IDs that appear in the previous version but not in the new one.
– Those IDs are the deleted posts.
– Mark them, send your alert, etc. (We also hit the API again with statuses/show to double-check that the post was really deleted.)
– Overwrite the old ID list with the new one.
– Repeat whenever you can fetch a new version of the timeline (Sina may rate-limit you if you do it too often).
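The loop above can be sketched in a few lines of Python. This is a minimal sketch, assuming post IDs increase chronologically (so anything older than the new snapshot's oldest ID has merely scrolled past the 200-post window rather than been deleted); the function name is ours, not part of the Sina API.

```python
# Minimal sketch of the diff step: an ID from the previous snapshot that
# is still within the new snapshot's window but absent from it is
# presumed deleted. Assumes IDs grow chronologically.

def find_deleted(previous_ids, current_ids):
    current = set(current_ids)
    oldest_kept = min(current_ids)  # IDs below this just aged out of the window
    return sorted(pid for pid in previous_ids
                  if pid not in current and pid >= oldest_kept)

old = [101, 102, 103, 104, 105]
new = [103, 105, 106]          # 101-102 scrolled off; 104 vanished mid-window
print(find_deleted(old, new))  # [104]
```

In production you would then feed each surviving candidate to statuses/show, as described above, before raising an alert.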


Searching our Sina and QQ Weibo archive

Screenshot at 2012-01-19 17:15:45

We built a search engine a while ago for the Sina Weibo archive, and since yesterday, also for the QQ Weibo archive. We use Lucene as the indexer (for quick full-text searches) and store all linked information in our standard database. The difference from the real search engines on the Sina and QQ Weibo websites is that we don’t currently implement any weighting: the results are simply everything we got, ordered by publication date.

We index every four hours, so there’s at least a 30-minute delay, and at most around 4 hours 30 minutes. There’s paging, too. Because we’re not Google, expect queries to take up to 1 minute to run (more if there’s lots of activity on the server). The search by region/province on the Sina search is also uber-slow.

Cool feature: you can link directly to searches! For instance, if you are interested in racing celebrity Han Han (韩寒), who has been under fire recently, you can use links such as these:
http://research.jmsc.hku.hk/social/search.py/qqweibo/?q=韩寒
http://research.jmsc.hku.hk/social/search.py/sinaweibo/?q=韩寒

Other cool feature: Google Translate! Write your search query in your language and, behind the scenes, we’ll try to send a query to the Google Translate API. You’ll know whether it worked when you get your results.


Ma Ying-jeou in pictures on WeiboScope

When you check out WeiboScope today, what you will notice is Yao Ming with sleeping delegates at the Shanghai CPPCC, and Ma Ying-jeou, re-elected on Saturday as Taiwan’s president for a second four-year term (there’s also this weird meme of Zhou Qifeng, Peking University’s president, grinning uncontrollably alongside Li Keqiang, China’s Vice-Premier).

mayingjeou

But what really caught my eye were all the photos sitting at the top of our Sina Weibo data stack, which probably runs into the thousands by the time you reach the bottom. The image wall above was generated using the image search portion of WeiboScope. One picture particularly making the rounds shows Ma in the US with his future wife, Zhou Meiqing.


Le bogue de l’an 2012

In case you noticed, the quality of our WeiboScope declined quite a bit towards the end of last week. It was caused by the transition to the first ISO week of 2012 (which started on Monday); consequently, only the most popular posts made in 2011 were counted. We didn’t lose anything, and things are back on track.
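To illustrate the week boundary that bit us: January 1st, 2012 fell on a Sunday, so in ISO 8601 terms it still belongs to week 52 of 2011, and the first ISO week of 2012 only began on Monday the 2nd. A quick check with Python's standard library (illustrative only, not our production code):

```python
# ISO week numbering does not line up with the calendar year: the first
# days of January can still belong to the last ISO week of the old year.
import datetime

d1 = datetime.date(2012, 1, 1)            # a Sunday
print(tuple(d1.isocalendar()))            # (2011, 52, 7): ISO week 52 of 2011

d2 = datetime.date(2012, 1, 2)            # the following Monday
print(tuple(d2.isocalendar()))            # (2012, 1, 1): first ISO week of 2012
```

Any query keyed on "the current ISO week" therefore flips its meaning at that Monday boundary, which is what briefly skewed our rankings.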

Featured today on WeiboScope:
The rumoured coup in North Korea
36 years since the death of Zhou Enlai
Some corrupt officials at the D & G in Hong Kong?


WeiboScope: image search by keyword

weibosearch

You might have heard of WeiboScope for its display of the most important images posted by a sample of users we selected. “So what?”, some users have asked. WeiboScope is a suite of visualisation tools for an archive of Sina Weibo posts that we collect and store in a local database, currently growing by some 2-3 million posts per week.

But the power of WeiboScope is not this particular visualisation (there are many of them), but rather the data underneath that sustains it. Rather than letting Sina Weibo dictate how the data produced by users should be displayed, we borrow a bit from the open data movement and repackage posts in ways that may be more useful to users. This is how the WeiboScope search by image came to be.

Consider these current use case scenarios:

1- A non-Chinese reader would like to know what the Chinese Weibosphere is currently thinking about the death of Kim Jong-il. They can type the Chinese name of Kim Jong-un in the search bar on Weibo.com and find a list of about 25 weibos. But because they are unable to read Chinese, they rely only on the images. They feel lost, and give up on Weibo (for the day).

2- A person with a native level of Chinese is doing research on suicide. Some cases are reported to go viral on the Internet, sometimes because of their staged, attention-seeking nature, and sometimes because their causes provoke deep societal debates. The researcher uses the search bar on Weibo.com, finding occasional irony and some irrelevant news. It is hard for them to assess the importance of one case relative to others within a certain period of time.

Now, consider that we had a sample of all Weibos ever produced and that our search engine is neutral as to what gets shown and what does not.

Scenario 1: Using the image search on WeiboScope, you can now find that one of the most popular images used in posts was this one. Then, by visual elimination, you may also notice some odder pictures, such as this one speculating on the younger Kim’s Christmas activities.

Scenario 2: Using the image search on WeiboScope, the researcher searches the word “suicide”. In March 2011, we tried this with an early version of this tool. Out of curiosity, browsing the popular image aggregation, we heard of a schoolchildren suicide case in Fujian. At that point, only one related post had reached viral level. Curious about the impact of this case on the Chinese Internet, we searched the characters for “suicide” on the search engine. The result? About 80% of the recent posts with the characters for “suicide” were related to the Fujian case.

The WeiboScope image search demonstrates that when you are allowed to mash, mix, and remix data, it can lead to discoveries and realisations that might not otherwise have been possible.

http://research.jmsc.hku.hk/social/obs.py/sinaweibo/#search

(For non-Chinese writers, the engine supports automated translation via Google Translate! If you are searching in Chinese characters, please put quotes around them.)


The trouble with popular users…

Screenshot at 2011-11-17 09:47:06

At some point in our research project, it seemed like a good idea to take all the users with more than a certain arbitrarily large number of followers (say, 1,000), download their posts, and analyse them. That no longer always seems to be the case; results vary from day to day.

We are set to release WeiboSphere, but will wait a little before pushing it. Right now, we take every user with 1,000 or more followers and fetch all their recent posts from the API. We aggregate them and produce an unfiltered (at least by no human filter) ranking of the most popular posts over 24 hours, 48 hours, one week, two weeks and one month.
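The aggregation step can be sketched roughly as follows. The `(post_id, created_at, repost_count)` tuple layout and the `top_posts` name are our own illustration, not the API's field names:

```python
# Sketch of an unfiltered time-window ranking: keep posts newer than the
# window's cutoff, then sort by repost count, descending.
import datetime

def top_posts(posts, window_hours, now, limit=10):
    """posts: iterable of (post_id, created_at, repost_count) tuples."""
    cutoff = now - datetime.timedelta(hours=window_hours)
    recent = [p for p in posts if p[1] >= cutoff]
    return sorted(recent, key=lambda p: p[2], reverse=True)[:limit]
```

Running this with window_hours set to 24, 48, 168, 336 and 720 would yield the five leaderboards mentioned above.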

Alas, in the last two days, all we’re seeing are female body parts, shoes and celebrities who returned to an incredibly thin size after a pregnancy.

The hope for now, until we improve the filters, is that we can see posts such as this one on an abducted girl in Guangzhou, posted yesterday morning.


Spawn more overlords?

Lucene -> Daemon

One of the biggest challenges in the project has been I/O. Across the networks that we monitor, we deal with large amounts of data that we need to write and read at every moment.

Lucene is a quick way to search through text, including Chinese. We used to rely on the database for this, but that quickly turned out to be terribly inefficient: to run a search, you had to visit every row (within the given parameters) and check whether the term appeared.

We asked our HKU colleagues in the computer science department for help, namely Reza Sherkat, a former IBM employee and now a post-doctoral fellow with Nikos Mamoulis. He had previously given us advice on inverted indexes, which, in a nutshell, use tokens of text (from the weibos, say) as keys in a gigantic array. The value for each key is the list of identifiers of the objects being indexed (for weibos, the weibo IDs).

So when you search for a word, you effectively only go through a list of unique words/tokens, each of which returns a bunch of weibo IDs.
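A toy version of such an inverted index, with a naive whitespace tokenizer standing in for a real Chinese segmenter:

```python
# Toy inverted index: each token maps to the set of IDs of the weibos
# that contain it, so a search is a single dictionary lookup.
from collections import defaultdict

index = defaultdict(set)

def add_post(post_id, text):
    for token in text.split():   # placeholder tokenizer; Chinese needs a segmenter
        index[token].add(post_id)

def search(token):
    return sorted(index.get(token, set()))

add_post(1001, "hello weibo")
add_post(1002, "hello world")
print(search("hello"))  # [1001, 1002]
print(search("world"))  # [1002]
```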

The second trick Reza told us about was the use of programs running in the background, commonly called “daemons”. Daemons are always there, waiting for another program to call them. One use we could (or should) make of this would be to keep a list of user IDs in memory: if you want to know whether a weibo was made by one of your users, there is no need to go to the database to check. You can do it all in memory.
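A minimal sketch of that in-memory idea, omitting the daemon plumbing itself (a real daemon would sit behind a socket or RPC layer); the names here are hypothetical:

```python
# Keep the followed-user IDs in a set inside a long-running process,
# so membership checks never touch the database.
followed_users = set()

def load_user_ids(ids):
    """Load once at daemon startup (e.g. from the database)."""
    followed_users.update(ids)

def is_followed(user_id):
    return user_id in followed_users  # O(1) lookup, no database round-trip

load_user_ids([42, 1337, 2500])
print(is_followed(1337))  # True
print(is_followed(99))    # False
```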

There are probably some more clever uses, such as for counting or going through large numbers of items.

It is well known, for instance, that for Google and Facebook to achieve their levels of efficiency, all the data that passes through effectively stays in memory. The problem with memory is that it requires an electric current to stay alive: one power outage (which we think should never happen) and the data dies.

Operating in memory (in RAM, that is) is much, much faster than fetching from disk. It should make a difference, and we shall try it on our 48 GB of RAM.


What an inconsistent API…

The Sina Weibo API will not always return what you expect. The web interface is one way to access the data posted on Weibo, but the API (application programming interface) is what programmers and applications talk to. If a post is gone (for whatever reason) from the Weibo website, is it also really gone from Weibo’s databases, and gone if you try to access it through the API?

Take the example of post #3351528048407216, made by a user called 北京徐晓, an author living in Beijing. It was posted on Monday night, just past midnight.

The microblogger wrote: “太好玩了,党报《光明日报》网站发表文章,抨击骆家辉轻车简从的背后,是资本主义及西方价值观的渗透,是美国的“新殖民主义”、“文化殖民主义”的体现。恼羞成怒挂不住了不如直接说,何必这么牛头不对马嘴的瞎拽呢?”. It google-translates to “Too much fun, the party newspaper “Guangming Daily” Web site published an article criticizing Locke pomp behind the Western values ​​of capitalism and the infiltration of America’s “new colonialism”, “cultural colonialism” is all about. Angry embarrassing as a direct say, irrelevant of the blind so why pull it?”

The article (our snapshot) was one of the most popular in the last 24 hours. Traces of the post cannot be found on the user’s timeline (see screenshot): there is now a gap between 00:37 and 00:48, whereas the post was made at 00:46 on August 29th.

The following screenshot shows how it now appears on the page of one of the users (we have counted 27,536 reposts in our archive so far) who reposted it in the meantime:

Screenshot-天涯赵瑜的微博 新浪微博-隨時隨地分享身邊的新鮮事 - Google Chrome

The message on the website is “該微博已被刪除”, which is “This Weibo has already been deleted” (example here). It’s different from the message “此微博已被原作者刪除”, which is “This Weibo has been deleted by the user” (example here), and which may also appear on your timeline when a post you reposted was deleted by the original user (but your message remains intact).

What happens now if you take the ID of the post (3351528048407216) and query it against the API (link; may not work if you are not logged in to Weibo)? You find that the post is still accessible from a programmer’s standpoint:

EDIT 2012-02-02: It seems that Sina has changed the deleted-post error messages. From the normal website, self-deleted and presumably system-deleted posts are now indistinguishable. But if you look at deleted posts through the API (using the statuses/show function), they definitely are not: a self-deleted post says “weibo does not exist” and a system-deleted post says “permission denied”. We have just started investigating different deleted posts through a fully automated method.
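The automated check can be as simple as inspecting the error message that statuses/show returns for a deleted ID. The `classify_deletion` helper below is our own hypothetical name; the two strings are the ones we observed, and a real version would parse them out of the API's JSON error response:

```python
# Classify a deletion by the statuses/show error message:
# "weibo does not exist"  -> deleted by the author
# "permission denied"     -> deleted by the system (censored)
def classify_deletion(api_error_message):
    msg = api_error_message.lower()
    if "does not exist" in msg:
        return "self-deleted"
    if "permission denied" in msg:
        return "system-deleted"
    return "unknown"

print(classify_deletion("target weibo does not exist!"))  # self-deleted
print(classify_deletion("permission denied"))             # system-deleted
```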

Screenshot-api.t.sina.com.cn-statuses-show-3351528048407216.json?source=4280451947 - Google Chrome


Has Ai Weiwei been a popular topic on Sina Weibo?

According to this pic, it would seem so. But upon further inspection, only the first nine posts in this mosaic were made in May or later; most came from early April, when Ai was arrested at Beijing airport.

This is a search tool that we use to go through our weibo database, but only over posts with pictures attached to them. We have had performance issues adding new entries for two weeks because of the ongoing indexing. Nonetheless, the take-home message is that there were lots of posts in the days following the arrest, but not much since. Because the archive is not up to date, it says nothing about the popularity of the topic now. (At least anecdotally, people have been talking about AWW on Weibo.)

(Note: I sampled a few AWW posts from the archive, and when you visit their actual pages on weibo.com, you can’t find them, presumably because they were deleted.)


Indexing a table containing Chinese text with tsearch2 and bamboo (can take forever)

Screenshot-csam@lantau: ~-data-sinaweibo-mostretweeted

Phew, it’s been more than three months since the last entry. We are still continuing our data collection of Sina Weibo, and it has become critically important to index it.

What’s indexing? Imagine individual weibo posts as files. Without indexing, it’s like having your files thrown into a mound of other files, in no particular order. That’s a bit daunting when you have stored over 100 million entries. An index basically orders the weibos using a particular column as the guide. We chose a column we frequently search on, such as the creation date of a weibo (a microblog post). The database builds an “index” using a data structure that speeds up search and retrieval of entries, such as a B-tree or a hash table.

An index typically takes a lot of physical storage space. For instance, the data for these 100 million posts takes up 15 GB, but the indexes (on the id, created_at, retweeted_status and user_id fields) take about 31 GB altogether.

However, we may also be interested in searching the weibos by keyword. An ordinary index on the text field alone would not help much: if you search for a keyword like 平安 (peace), you would basically have to go through all 100 million entries’ text fields to find occurrences of these characters. A regular B-tree or hash index could only help you find entries containing “平安” if the text started with those characters.

That’s where indexing techniques designed for full-text search come into play. One of them is the inverted index, which is available with Tsearch2. The strategy is to go through all your entries, tokenize the text field you want to search, and store each token together with identifiers for the rows where it appears. For the text “我喜歡平安” (I like peace), you would obtain three tokens: “我” (I), “喜歡” (like) and “平安” (peace).

Now, “我” is a very common word in Chinese and is considered a stop word in natural-language-processing parlance (see Chinese stop words). It is so common that it is not very useful to keep track of its occurrences (nor would it be efficient, because a majority of posts contain it).

To repeat, the index is basically a list of keywords (tokens) on one side and, for each, a list of identifiers of the rows that contain it. So, the next time you search for posts containing “平安”, instead of scanning all the text of the 100 million posts, you look up “平安” in the index and get the list of posts containing it. The keyword search becomes much faster, but the index must also be updated every time you insert or update an entry. There’s a give and take.
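The worked example above in a few lines of Python. The segmentation is hard-coded here, since in practice Bamboo produces the tokens, and the one-entry stop-word list is only for illustration:

```python
# Stop-word filtering before indexing: "我" is dropped, so only the
# useful tokens from "我喜歡平安" reach the inverted index.
STOP_WORDS = {"我"}

def tokens_for_index(segmented):
    """Takes already-segmented tokens (Bamboo's job) and drops stop words."""
    return [t for t in segmented if t not in STOP_WORDS]

print(tokens_for_index(["我", "喜歡", "平安"]))  # ['喜歡', '平安']
```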

That said, I have now been running the backlog of rows to index for almost two weeks: 24/7 for the past 14 days. Segmenting Chinese sentences (using Bamboo) is CPU-intensive, but having to read and write through all 100 million entries to create the index is probably the bottleneck. It may never finish (or not for another couple of weeks), so I will probably abandon the task and archive the old posts we no longer use. I may need to estimate how long indexing the entire table would take, as there is no simple way to gauge the progress.