This error occurs when you try to generate a business calendar with bcal create from a date variable that contains missing values. Simply drop or exclude the missing observations to resolve the error.
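For instance, a minimal sketch (the variable tradedate and the calendar name mycal are hypothetical):

* bcal create mycal, from(tradedate) replace   // fails when tradedate has missings
preserve
drop if missing(tradedate)    // exclude the missing dates
bcal create mycal, from(tradedate) replace
restore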
Traffic Congestion
"Dr. Hung Wing-tat, Associate Professor in PolyU's Department of Civil and Environmental Engineering, believes a falling number of parking spaces is no bad thing: it discourages driving and helps relieve traffic congestion. In London, for example, vehicles entering the city centre face not only expensive parking fees but also a congestion charge. Hong Kong, by contrast, has not reviewed its parking provision in ten years, nor has it shown the resolve to curb the growth of private cars or slash private parking spaces, so congestion here persists."
I don't quite agree with Professor Hung.
First, fewer parking spaces may reduce the number of vehicles in the long run, but in the short run the effect is the opposite, because vehicles simply stay on the roads instead. Trucks and tour coaches are a striking example: parking for both is severely inadequate, and the result, as everyone knows, is loading and passenger pick-up all over the streets. The police have admitted that they rarely shoo these vehicles away precisely because doing so would not reduce their number; it would only multiply the vehicles circling the district, doing more harm than good.
Second, London's scheme is meant to cut the number of vehicles coming in from outside the city centre, but how could Hong Kong follow suit? Hong Kong has essentially no urban/suburban divide: Central, Kwun Tong, and Fanling all jam up at peak hours. Vehicles on the road at peak hours are either trucks or commuters; the former must make deliveries during business hours, and as for the latter... well, tell more people to switch to public transport and how many more MTR trains do you expect them to wait for? Even if we really wanted such a scheme, raising diesel duty or truck licence fees would be far simpler than spending money on the infrastructure needed to collect a congestion charge.
Reading Culture and Job Hunting
Whether or not the article's background facts are accurate, young people in Hong Kong clearly do not, by and large, enjoy reading. Reading is self-learning in action, and the ability to teach oneself is arguably among the abilities a knowledge economy demands most; China can get by exporting cheap labour, but Hong Kong cannot. Young people here have found job hunting increasingly difficult in recent years, and weak self-learning ability is likely one of the reasons.
Take the ability to program as an example. Before the Internet became widespread, many aspiring programmers taught themselves from books, and a good number succeeded. Today, although vast numbers of programming tutorials and sample downloads are just a few keystrokes away, the proportion of young Hongkongers who can code does not appear to have risen. With technology steadily replacing human labour, the range of jobs open to those who cannot code keeps shrinking.
Facebook's Acquisition of Whatsapp
I had been meaning to write about Facebook's acquisition of Whatsapp since last week, because from a teaching standpoint it is an excellent case study.
First, just how high is a US$19 billion price tag? Under Whatsapp's US$1 per user per year charging model, it takes 19 billion user-years of revenue to recoup, before even counting operating costs. Even with well over a billion users in the future, the price would take decades to pay off.
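A quick back-of-envelope check of that payback claim (my own arithmetic, using the figures above):

$$\frac{\$19\times 10^{9}}{\$1\ \text{per user-year}} = 1.9\times 10^{10}\ \text{user-years}, \qquad \frac{1.9\times 10^{10}\ \text{user-years}}{1.5\times 10^{9}\ \text{users}} \approx 13\ \text{years},$$

and that is thirteen years of gross revenue merely to recover the purchase price, before any operating costs.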
We can also look at it from another angle: how much did the industry's two behemoths, Apple and Google, spend on acquisitions in fiscal year 2013? Apple paid US$1.407 billion for at least seven companies; Google paid US$1.448 billion for eighteen. In other words, what Facebook spent on Whatsapp alone was more than six times what Apple and Google spent combined over an entire year (US$19B against roughly US$2.86B, a ratio of about 6.7).
Facebook's willingness to pay an astronomical price for Whatsapp is, of course, about its 450 million active users. The truth is that on the fast-moving Internet, Facebook is already "out". Just watch the young people around you: they are far more engaged with Whatsapp than with Facebook. Leung Wing-hang (梁永行), who graduated from the CUHK economics department last year, collected data for his thesis on how fellow students used Whatsapp and other instant-messaging apps. Besides showing that each app's new-user count is positively correlated with its existing-user count, the "network effect" of economics, he found that Whatsapp's growth was decelerating the least among all the apps surveyed. Looking across history, though, instant-messaging apps rising and falling in succession seems to be the norm: weren't ICQ and MSN all the rage once?
Another point worth noting is that US$15 billion of the US$19 billion is in stock. Anyone who has taken corporate finance may recall that firms are especially willing to pay for acquisitions with equity when their own valuations are high, because each share then stands in for a large amount of cash. At 110 times earnings, Facebook's price-to-earnings ratio is certainly high next to Google's 30, let alone Apple's and Microsoft's 13. If you believe Facebook is overvalued, its willingness to buy Whatsapp at a similarly inflated price is not so surprising.
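A rough illustration of that logic, on my own assumption that Google's 30x is the "fair" multiple for Facebook:

$$\$15\text{B (stock)} \times \tfrac{30}{110} \approx \$4.1\text{B}, \qquad \$4.1\text{B} + \$4\text{B (cash)} \approx \$8\text{B},$$

so in fair-value terms the deal would cost Facebook's shareholders nearer US$8 billion than US$19 billion.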
Like many similar pieces the Hong Kong Economic Times has run, the headline is badly misleading: to most people, "saving" means low-risk saving behaviour. The subject of the article made his money mainly by investing in the stock market, which at best counts as sound "money management". Moreover, he has already lost his entire principal several times. His investing may be conservative now, but financial history tells us that bad investments often look safe right up until they blow up. Should we really be encouraging young people to "save" this way?
P.S. Purely as a matter of investment, had the subject put his capital into property two or three years ago, his wealth would likely be double what it is now. So the Economic Times could equally have used the same material to write "Post-80s investing: high risk, low return". You really can spin it whichever way you like.
Apparent Theft at Mt. Gox Shakes Bitcoin World
Economists have for a while been asking whether Bitcoin is a good store of value, for reasons ranging from demand to ease of entry. Now a bigger problem has emerged: security. The Mt. Gox shutdown is the biggest incident so far, but it is far from the first (see https://bitcointalk.org/index.php?topic=83794.0#post_toc_17).
Even without the transaction-malleability issue, I instinctively see one of Bitcoin's biggest attractions, access to your money unfettered by government control, as a double-edged sword. If you want no one to be able to trace your identity, then no one can trace the identity of whoever took your money either. So while I really do not see why people would entrust their private keys to service providers, keeping them yourself is only about as safe as stuffing cash under your mattress.
Too many cables to route; one more hole was needed.
A 10.8V drill has barely enough power to bore a 7cm hole through a 1-inch particleboard tabletop, but now I can finally run cables any way I want.
Trust these "further-studies experts" even a little and your future may end up nowhere. You cannot approach a PhD with the typical Hong Kong mentality towards master's degrees, of collecting one more credential to ease promotion. In fact, unless you graduate from a top research school, a PhD is a liability in the job market: academia will not look at you, and private employers will deem you overqualified. Doing a PhD for the title is worse still; that is only for those who have already made their name and fortune.
Do not ever consider getting a PhD for better career prospects. It will, for sure, not work out that way.
Preserving Constants in a Stata Collapse Operation
Let's say you have a variable that you know is constant within each group. What is the best way to preserve it through a collapse operation in Stata? You might think taking the first non-missing value (firstnm) must be the fastest, since in theory it needs only one step per group. If so, you are in for a surprise: Stata is actually faster at calculating the mean.
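Concretely, the two candidates look like this (the variable names are mine, for illustration):

collapse (firstnm) constvar, by(groupvar)    // take the first non-missing value
collapse (mean) constvar, by(groupvar)       // the mean of a constant is the constant

Either call preserves constvar in the collapsed data; the question is which runs faster.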
Here are the simulation results for 100 groups of 1000 randomly generated observations, averaged over 30 runs:
statistic   time (seconds)
mean        0.0443
median      0.1062
min         0.0844
max         0.0657
count       0.0456
firstnm     0.0473
lastnm      0.0464
The relative speeds are quite stable to variations in the number of groups and observations. Based on my analysis of the underlying algorithms collapse uses, firstnm is slow because it requires an order-preserving (stable) sort of the data, and stable sorts are slow relative to non-stable ones. To confirm this, I ran the test again with just one group of 100,000 observations:
statistic   time (seconds)
mean        0.0614
firstnm     0.0508
As expected, firstnm is now faster. The calculation of the mean also slows down more than that of firstnm as the number of groups decreases.
Based on my simulations, mean is already faster with as few as three groups, so mean is the way to go in most cases.
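For anyone who wants to reproduce the comparison, here is a minimal sketch of a timing harness along the lines described above (my own reconstruction, not the original script; names are illustrative):

* build 100 groups of 1,000 observations with a within-group constant
clear all
set seed 20140301
set obs 100000
generate groupvar = ceil(_n / 1000)    // 100 groups of 1,000 obs each
generate constvar = groupvar * 10      // constant within each group

* time one collapse; preserve/restore so the test can be repeated
timer clear
preserve
timer on 1
collapse (mean) constvar, by(groupvar)    // swap (mean) for (firstnm), etc.
timer off 1
restore
timer list 1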