Yeah, the AI lacks any reasonable foresight most of the time. I'm not sure what you can do about it; I know the 'conditional statement' response seems like an easy solution, but CA touted a goal-based AI...
I suspect that if CA actually implemented something like that, the problem is structural. The issue would be that the AI doesn't analyse a set of options after each iteration; instead it just picks one.
For those not familiar, a basic goal-based AI can be described as follows (there's a rough sketch of the loop after the list).
-You have a set of X choices that can be performed at each step, plus a reward function (for strategic thinking, I think a mix of a long-term and a short-term function would be best).
-During each step, the AI evaluates the reward function for each option, selects the one with the best reward, and adds that action to its to-do list.
-It steps through this Y times and then starts executing the to-do list.
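To make that concrete, here's a minimal sketch of that greedy loop in Python. All the names here (State, Action, plan_greedy, the toy fields) are made up for illustration; this is just the shape of the algorithm, not whatever CA actually shipped.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class State:
    # whatever the game actually tracks: gold, armies, threatened provinces, etc.
    gold: int = 0
    threat: int = 0

@dataclass
class Action:
    name: str
    apply: Callable[[State], State]   # returns the state after taking the action

def plan_greedy(state: State,
                actions: List[Action],
                reward: Callable[[State], float],
                steps: int) -> List[Action]:
    """Build a to-do list by always committing to the single best-scoring action.
    `reward` could be a weighted mix of short-term and long-term scoring."""
    todo: List[Action] = []
    for _ in range(steps):                # the Y iterations
        best = max(actions, key=lambda a: reward(a.apply(state)))
        todo.append(best)                 # commit to it immediately
        state = best.apply(state)         # advance the simulated state
    return todo
```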
The problem with this, though, is that it results in a very myopic AI: it will invariably build the list of actions with the most immediate benefits.
The simplest way of fixing that is to keep a Z-sized set of the best options at every step, so instead of a list you generate a tree. After Y steps, you select the single path in the tree with the highest reward score. Once you adjust the AI to look at sets of decisions, it gets noticeably more intelligent. This is why chess AIs take forever: they are analyzing all the plausibly good moves, several steps deep.
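One common way to implement that "keep the Z best options" idea is a beam search: expand every action from each of the Z surviving partial plans, score them, prune back down to Z, and after Y steps return the whole path that scored best. Rough sketch below, reusing the hypothetical State/Action/reward pieces from the first snippet (again, illustration only, not CA's actual code).

```python
from typing import Callable, List, Tuple

def plan_beam(state: "State",
              actions: List["Action"],
              reward: Callable[["State"], float],
              steps: int,
              beam_width: int) -> List["Action"]:
    """Beam search over action sequences: a pruned tree instead of a single greedy list."""
    # each entry is (cumulative reward, resulting state, path of actions so far)
    frontier: List[Tuple[float, "State", List["Action"]]] = [(0.0, state, [])]
    for _ in range(steps):                                   # the Y iterations
        candidates = []
        for score, s, path in frontier:
            for a in actions:
                nxt = a.apply(s)
                candidates.append((score + reward(nxt), nxt, path + [a]))
        # prune back down to the Z most promising partial plans
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = candidates[:beam_width]
    # return the single best path found in the (pruned) tree
    return max(frontier, key=lambda f: f[0])[2]
```

The obvious trade-off is cost: the work grows with Z and Y, which is exactly why deeper lookahead means longer turn times.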
Now, I can't really tell if this is actually the problem, but it seems like it might be part of it. It could also be that the function generating the set of candidate actions offers 'attack' every time, or that the reward function for each decision is seriously screwed. I dunno.