Thanks. Master also has a learning capability; whether it uses the same approach is anyone's guess. Go can't use alpha-beta the way chess does, and chess naturally can't use AlphaGo's approach either.
There is an important difference between chess and go, at least from a programmer's perspective. Chess is more of a tactical game, whereas go is more of a strategic game. This means that in chess, calculation depth trumps positional evaluation. That is basically the key insight that distinguishes the "old" engines like Fritz, Shredder and Junior from the newer generation like Fruit, Rybka, Houdini, Stockfish and Komodo. Because you have to evaluate the position at the end of each line, you want to calculate lots of lines, and the quality of the evaluation matters less than search depth, chess engines use lean and fast evaluation functions.
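To make that concrete, here is a minimal sketch of alpha-beta search over a toy game tree (a hypothetical example, not a real chess engine). The "evaluation function" is just the leaf value, deliberately trivial, mirroring how chess engines keep evaluation cheap so the search can go deep:

```python
# Minimal alpha-beta (minimax with pruning) over a nested-list game tree.
# Inner lists are decision nodes; plain numbers are leaf evaluations.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    provably cannot influence the result."""
    # Leaf or depth limit reached: run the (cheap) evaluation function.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off: the opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cut-off: we already have a better option
        return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]
print(alphabeta(tree, 3, float("-inf"), float("inf"), True))  # → 6
```

The point is that the per-leaf work is a single comparison, so almost all the engine's time goes into visiting more nodes rather than judging each one carefully.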
In go, on the other hand, the tactical complexity is too great even for computers. Consequently, evaluating positions and moves accurately is key. What AlphaGo brings to the game is this evaluation power, which is based on convolutional neural networks.
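The core operation inside such a network is sliding a small weight kernel over the board to detect local patterns. A pure-Python toy version (illustrative only; real networks stack many such layers with learned weights) looks like this:

```python
# Valid-mode 2D convolution: one feature map produced by one kernel.

def convolve2d(board, kernel):
    """Slide `kernel` over `board` and return the response at each offset."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(board) - kh + 1):
        row = []
        for j in range(len(board[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += board[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy 4x4 "board" (1 = stone) and a 2x2 kernel that fires on
# diagonal pairs of stones.
board = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 1],
         [0, 0, 1, 0]]
kernel = [[1, 0],
          [0, 1]]
print(convolve2d(board, kernel))
```

A go network applies thousands of such kernels across the whole board, which is what makes its positional judgement so much richer, and so much slower, than a hand-written evaluation.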
To finally get to my point: whereas chess evaluation functions are lean and fast, neural networks have millions, sometimes billions, of parameters. Because "learning" in this context means tweaking parameters, there is much more room for progress in self-learning go programs.
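The gap in parameter count is easy to estimate. The chess side below uses the classic piece values; the network shape is an assumption, loosely modelled on AlphaGo-style convolutional stacks, not AlphaGo's actual architecture:

```python
# Back-of-envelope comparison of evaluation-function sizes.

# A bare-bones chess evaluation: one weight per piece type -- six parameters.
piece_values = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
print(len(piece_values))  # → 6 tunable parameters

def conv_params(in_channels, out_channels, kernel_size):
    """Parameter count of one convolutional layer (weights + biases)."""
    return out_channels * (in_channels * kernel_size * kernel_size + 1)

# Assumed stack: a 5x5 input layer over 48 feature planes, eleven 3x3
# hidden layers of 256 filters, and a final 1x1 layer down to one output.
total = conv_params(48, 256, 5)          # input layer
total += 11 * conv_params(256, 256, 3)   # hidden layers
total += conv_params(256, 1, 1)          # output layer
print(total)  # → 6798593, i.e. millions of parameters to tune
```

Six numbers versus roughly seven million: that is why "learning" has so much more to work with on the neural-network side.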
So yes, you could use a setup like AlphaGo's to create a chess engine, but it wouldn't be particularly good. Running the evaluation function would take so much time that you would have to use a huge cluster of GPUs to reach the necessary search depths (which is what AlphaGo does). You could create a very good evaluation function, but the speed trade-off isn't worth it.
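Rough arithmetic shows the cost. The throughput figures below are illustrative assumptions, not measurements: a lean hand-written evaluation at a million calls per second versus a hypothetical neural-net evaluation at a thousand per second on a single CPU.

```python
import math

branching = 35   # typical chess branching factor
budget_s = 60    # one minute of thinking time

def reachable_depth(evals_per_second):
    """Full-width depth reachable if every visited node is evaluated once."""
    nodes = evals_per_second * budget_s
    return int(math.log(nodes, branching))

print(reachable_depth(1_000_000))  # → 5  (lean hand-written eval)
print(reachable_depth(1_000))      # → 3  (slow neural-net eval)
```

Losing two plies of full-width search to a slower evaluator is crippling in a tactical game like chess, whereas in go the better judgement per node is worth the price.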