When people say that something works “like a machine”, the implication is that the work is not only ceaseless but flawless. This seems to apply in particular to machine intelligence: intelligent robots and computers are, not only in fiction, assumed to have all information available (and only correct information), and then to proceed flawlessly to the correct conclusion. Often an evil conclusion, certainly, but still the only possible one.
Well, of course real software doesn't work that way. I have not worked with AI per se, but any interactive system should be as “intelligent” as possible, which in practice means studying users, figuring out what they want done most of the time, and then making the interface anticipate what the user wants in any given situation. Often this turns out not to be what the user wanted, and bad user interfaces tend to do their anticipation in a way that annoys the user. Good user interfaces, on the other hand, are unobtrusive: they smoothly let the user continue with whatever was actually intended, silently withdrawing whatever suggestion had been proposed.
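The distinction can be made concrete with a small sketch. Everything here is hypothetical (the `Editor` class, its `history`, the method names): the point is only that dismissing a suggestion must restore exactly what the user had typed, leaving no trace.

```python
# A minimal sketch of the "unobtrusive suggestion" idea: the interface
# proposes a likely completion, but a single dismiss returns the exact
# text the user had typed, with no state left behind.
# All names here (Editor, history, suggestion) are illustrative only.

class Editor:
    def __init__(self, history):
        # history: previously observed inputs, most frequent first
        self.history = history
        self.text = ""
        self.suggestion = None

    def type(self, fragment):
        self.text += fragment
        # Anticipate: propose the most common historical input
        # that extends what the user has typed so far.
        self.suggestion = next(
            (h for h in self.history
             if h.startswith(self.text) and h != self.text),
            None,
        )
        return self.suggestion

    def accept(self):
        if self.suggestion:
            self.text = self.suggestion
        self.suggestion = None
        return self.text

    def dismiss(self):
        # Silent withdrawal: the user's own text is untouched.
        self.suggestion = None
        return self.text


ed = Editor(["translate.txt", "translate.py", "notes.md"])
ed.type("tra")          # suggests "translate.txt"
print(ed.dismiss())     # -> tra  (the suggestion vanishes without a trace)
ed.type("nslate.p")
print(ed.accept())      # -> translate.py
```

The design choice worth noticing is that `dismiss` is the cheap path: the annoying interfaces are the ones where rejecting the machine's guess costs more effort than accepting it.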
Machine translation has always been an important task for AI, and the applications I have tried seem to aim for the ideal of the all-knowing computer: submit a text for translation and you get the output all at once, unalterable, no matter how bizarre it ends up. Shouldn't it be possible, in this day and age, to have an interactive translation application that presents alternative interpretations of the input and lets the user guide the translation? After all, even when I translate text myself I end up making notes in the output, stating that a particular interpretation depends on a previous term having meant this and not that.
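What such an interactive translator might look like can be sketched in a few lines. The tiny lexicon and the `choose` callback are purely illustrative assumptions, not any real translation API; the point is that where the input is ambiguous, the program asks rather than silently committing to one reading.

```python
# A hypothetical sketch of interactive translation: ambiguous words are
# resolved by the user instead of by the machine alone.
# LEXICON and choose() are invented for illustration.

LEXICON = {
    "bank": ["riverbank", "financial institution"],
    "bat": ["flying mammal", "club used in sport"],
    "dog": ["dog"],
}

def translate(words, choose):
    """Translate word by word; `choose` resolves ambiguities.

    choose(word, alternatives) -> index of the reading the user picks.
    In a real interface this would be a prompt or a dropdown; here it
    is just a callback, so the flow can be tested non-interactively.
    """
    out = []
    for w in words:
        alternatives = LEXICON.get(w, [w])
        if len(alternatives) == 1:
            out.append(alternatives[0])
        else:
            out.append(alternatives[choose(w, alternatives)])
    return out

# Simulated user who picks the second reading of "bank":
result = translate(
    ["dog", "bank"],
    choose=lambda w, alts: 1 if w == "bank" else 0,
)
print(result)   # -> ['dog', 'financial institution']
```

A fuller version would also have to record the user's earlier choices, so that "bank" resolved once as "riverbank" stays consistent later in the text, which is exactly the kind of note a human translator makes by hand.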