How to count Japanese words in Go-lang

Walking through the Go Tour gives a nice impression that Unicode is supported out of the box.

Counting words in languages that don't use standard separators like spaces, especially Japanese and Chinese, has been painful in other programming languages (PHP). So I'm curious to know whether it is possible to count words written in Japanese (e.g. katakana) using the Go programming language.

If yes, how?

The answer is yes. It is "possible to count words written in Japanese (e.g. katakana) using the Go programming language." But first you need to improve your question.

Someone reading your phrase, "standard separators like spaces", might believe that word counting is a well-defined operation. It is not, even for languages like English. In the phrase, "testing 1 2 3 testing", does the string "1 2 3" represent one word, or three, or zero? Is the answer different for "testing 123 testing"? How many words are in the phrase, "testing <mytag class="numbers">1 2 3</mytag> testing"?
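To make that concrete, here is a small Go sketch of my own (nothing from your question) in which two equally defensible definitions disagree on the same input:

package main

import (
	"fmt"
	"strings"
	"unicode"
)

func main() {
	s := "testing 1 2 3 testing"

	// Definition A: a word is any maximal run of non-space characters.
	fmt.Println(len(strings.Fields(s))) // 5

	// Definition B: a word must contain at least one letter.
	n := 0
	for _, tok := range strings.Fields(s) {
		if strings.IndexFunc(tok, unicode.IsLetter) >= 0 {
			n++
		}
	}
	fmt.Println(n) // 2
}

Neither answer is wrong; they simply encode different specifications.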

Someone might also believe the Japanese language has a concept of "words" analogous to English, just with a different syntactical convention. That is not correct for many languages, including Japanese, written Chinese, and Thai.

So, you must first improve your question by defining what "words" are, in Latin-script text, for languages like English.

Do you want a simple lexical definition, based on the presence of spacing characters? Then consider Unicode TR 29, Version 4.1.0, "Text Boundaries", Section 4, "Word Boundaries". It defines word boundaries in terms of regular expressions and Unicode character properties. The localisation industry standard GMX-V, in its own Word Boundaries section, uses TR 29.

Once you have your definition, I'm confident you'd be able to implement it using Go packages like unicode and text/scanner. I haven't done this myself. From a quick look at the official packages list, it looks like the existing packages don't have a TR 29 implementation. But your question asks if it is "possible", not "already implemented by an official package".
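To show the kind of thing I mean, and no more than that: below is a minimal sketch, using only the standard unicode package, of a counter in the spirit of the TR 29 default rules. It treats a maximal run of letters or digits as one word; a faithful implementation would follow the full rule set and the Unicode word-break property data.

package main

import (
	"fmt"
	"unicode"
)

// countWords is a deliberately rough sketch: it counts maximal runs
// of letters or digits as "words". It is not a TR 29 implementation.
func countWords(s string) int {
	n := 0
	inWord := false
	for _, r := range s {
		isWordRune := unicode.IsLetter(r) || unicode.IsDigit(r)
		if isWordRune && !inWord {
			n++
		}
		inWord = isWordRune
	}
	return n
}

func main() {
	fmt.Println(countWords("testing 123 testing")) // 3
	fmt.Println(countWords("テスト、テスト"))           // 2: the comma splits the runs
}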

Next, for Japanese: do you want a simple lexical definition of "word"? If so, Unicode TR 29 supplies it. They say,

For Thai, Lao, Khmer, Myanmar, and other scripts that do not typically use spaces between words, a good implementation should not depend on the default word boundary specification. It should use a more sophisticated mechanism, as is also required for line breaking. Ideographic scripts such as Japanese and Chinese are even more complex. Where Hangul text is written without spaces, the same applies. However, in the absence of a more sophisticated mechanism, the rules specified in this annex supply a well-defined default.
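To see both what a default gives you and why TR 29 calls ideographic scripts "even more complex", here is a rough sketch of my own (not TR 29's algorithm) that segments text into runs of the same Unicode script, using the script range tables the standard unicode package exports:

package main

import (
	"fmt"
	"unicode"
)

// script reports which of a few Japanese-relevant scripts r belongs to.
// unicode.Katakana, unicode.Hiragana, unicode.Han and unicode.Latin are
// range tables exported by the standard unicode package.
func script(r rune) *unicode.RangeTable {
	for _, t := range []*unicode.RangeTable{
		unicode.Katakana, unicode.Hiragana, unicode.Han, unicode.Latin,
	} {
		if unicode.Is(t, r) {
			return t
		}
	}
	return nil
}

// segments groups consecutive runes of the same script, a crude
// stand-in for a real word-boundary mechanism.
func segments(s string) []string {
	var out []string
	var cur []rune
	var curScript *unicode.RangeTable
	for _, r := range s {
		sc := script(r)
		if sc == nil || (sc != curScript && len(cur) > 0) {
			if len(cur) > 0 {
				out = append(out, string(cur))
				cur = nil
			}
		}
		if sc != nil {
			cur = append(cur, r)
		}
		curScript = sc
	}
	if len(cur) > 0 {
		out = append(out, string(cur))
	}
	return out
}

func main() {
	fmt.Println(segments("テレビを見る")) // [テレビ を 見 る]
}

On テレビを見る ("to watch television") this yields テレビ, を, 見, る: the katakana loanword happens to come out as one unit, but the verb 見る is split between its kanji stem and its hiragana ending. Closing that gap is exactly the job of the "more sophisticated mechanism" the annex mentions.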

If you want a linguistically sophisticated definition of "word" in the Japanese context, then you need to start considering the issues raised by @Jhilke Dai, Sergio Tulentsev, and the other contributors. You will need to design your specification of "word". Then you will need to implement it. I'm confident you will not find such an implementation in an official Go package as of July 2014. However, I'm also confident that if you can design a clear specification, it is "possible" to implement it in Go.
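For what it's worth, the Go-shaped skeleton of such a design could be as small as an interface, so that the naive definition and your sophisticated one stay interchangeable. Everything below is hypothetical, just to show the shape:

package wordcount

import "strings"

// Segmenter encapsulates one particular definition of "word":
// swap the implementation and you swap the specification.
// (Hypothetical sketch; no such official package exists.)
type Segmenter interface {
	Words(text string) []string
}

// Whitespace is the naive lexical definition: a word is a maximal
// run of non-space characters. A TR 29 segmenter, or a dictionary-
// based Japanese morphological analyser, would satisfy the same
// interface.
type Whitespace struct{}

func (Whitespace) Words(text string) []string {
	return strings.Fields(text)
}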

Now: how many words are there in this reply? How did you count them?