
Thread: MTW based game

  1. #1

    Default

Would anyone be interested in playing a Medieval: Total War-based alteration that would essentially be JUST the campaign game? The graphics would also be simpler and some of the details would differ, but the primary difference would be an expanded and hopefully improved diplomatic game.

The goal is to test out an experimental AI that could play strategic games such as MTW, Risk, CIV, and so on.

The plan is to cure some common failings of strategy AIs, and also to build a variable-difficulty opponent that does not depend on 'cheats' (that is, the AI should play smarter, not simply get a +25% bonus to everything). Feel free to list some things you hate about the MTW AI in this thread as well.

The game will have all of the units and buildings of the non-expansion MTW game, with the same properties. It will also have the character system with mostly similar mechanics (some of the heir statistics system will be modified). The world map will also remain the same, although only the Early version will be used for the experiment. The strategic agents will be modified, but will at least have the abilities present in MTW. Finally, the interface will be expanded with some new screens relevant to the modifications.

Things that will be removed from the game include the battles, which will be auto-calculated; most of the cool graphics (there will still be figurines and the same click-and-drag interface); and small things, such as cheat codes, that are not really important.

What I need to know is: if this game is built, would there be a sufficient base of guinea pigs... er, gameplaying enthusiasts, to give the system's strategy ability a decent grading? Feel free to mention things that you would like improved, but please don't ask for trivial things like new units or buildings; the goal is not to build a new MTW but rather to test a concept that may be used in the future in games like MTW. If, for instance, you notice a common AI issue in the strategic game, such as the AI always ganging up on the human player whether it is in its interest or not, mention it; it is things like that which remind us all that it is a bunch of ones and zeros playing against us.

  2. #2
    Moderator Moderator Gregoshi's Avatar
    Join Date
    Oct 2000
    Location
    Central Pennsylvania, USA
    Posts
    12,980


    Greetings VividYoshee.

There must be something in the water, because yours is the second request for testers/players of an online game in development.

    How many players are you looking for? Is there software that will be controlling the game? Is there a website and/or forum where people can find more information?
    This space intentionally left blank

  3. #3


I hope no one else is working on an experimental AI project.

    This is more of an initial survey to see if the interest is there. I'm a big MTW fan, as well as strategy games in general, so I figured it would be a fun way to kill a few birds with one stone (MTW addiction, research, and game programming).

    If I get a decent response of people at least interested in learning more I will create a full web page and discussion forum outlining everything that will be preserved and what will be changed, as well as the basics of my AI idea. It would still be a couple months down the road before a full game would be released.

It won't be eye-poppingly gorgeous; it is a technology mod rather than a graphics or content one. If I felt I could manage it, I would simply hack it into MTW itself, replacing the AI routines with my own, but I doubt CA is interested in handing out their source code and explaining it to a rookie like me :) .

My target market is probably role-players, as my intention with the AI is to make it 'more human' and not necessarily more difficult (though I believe the two go together well). Faction leader personalities (stats) will play more of a role, and they'll act in a mostly understandable manner. The AI will defend its borders and make attacks based on expected gain and loss, instead of just attacking whichever player is largest/weakest, or is currently at war with someone else. The AI will also seek out stable positions better (I hope): Denmark will make sure it grabs Scandinavia, England will get unified, and so on, because the AI will understand that it can create defensible borders with fewer troops in those situations.

The computer will also play with some character: there will be kings that are benevolent doves seeking safety and trade, while others will be Dread Lords crushing all under their heel, at the expense of their own people if need be. Strategic agents will be expanded, including the optional addition of treaties, which are offers allowing you to give up something in return for something else (for instance, bartering a province for the hand of a princess; princesses are much more valuable due to the breeding system). I'll also tweak the strategic agents, generally in the direction of making them stronger overall.

    I can go into more detail, and am eager to hear other opinions and ideas.

  4. #4
    Arrogant Ashigaru Moderator Ludens's Avatar
    Join Date
    Nov 2003
    Posts
    9,065
    Blog Entries
    1


    It sounds interesting, but how exactly are you going to make the AI more intelligent?

I realize this may be impossible to explain to someone (like me) who doesn't have any experience with programming, but I hope you will be able to do it. Perhaps you could explain what you think the AI is doing wrong now.

What I think the AI is doing wrong now is that it hasn't got an overview of the situation. But I have no idea how you could program a computer to take that into account without getting stuck in endless priority lists. You would need something that assigns priorities automatically, like the human mind does.
    Looking for a good read? Visit the Library!

  5. #5


    You may regret you asked that... :)

No, you cannot create priority lists to make the AI smarter; my belief is that scripted artificial intelligence systems are not capable of taking games to the 'next level'. People don't necessarily want 'harder' games so much as they want a game that will make them think. Strategy games do a very good job at this, even with modern AIs, but inevitably a sort of fatigue sets in as the computer becomes too predictable. There isn't a way to change this with scripts; they do not have the flexibility, even if you insert all sorts of randomness or variables.

You could argue there is no need to make intelligent opponents given the growing prevalence of multiplayer, but even in multiplayer settings you have annoyances and a lack of control. There is also a segment that likes to role-play (no, not Everquest glorified-chat-room role-play), and that is difficult, as other people tend to want to win more than to play a part in the game (such as a villain to be foiled).

So, with all that said, there is a need for a computer-simulated opponent that has the 'creativity' of a human, but is willing to be beaten up on, to play imperfectly (when it suits the purposes of playing a character), and to respond to the player in a rational, but generally unpredictable, way. Scripting can't provide this, so some new decision-making process needs to be implemented.

    Oddly enough, what you said:
    ---------
    What I think the AI is doing wrong now is that it hasn't got overview over the situation. But I have no idea how you could program a computer to take that into account, without getting stuck in endless priority lists. You would need something that assign priorities automatically, like the human mind does.
    ----------
is very important. The AI does currently have an overview; however, its vision is limited by the narrow constraints of its scripting. It usually catches the major priorities, but it only knows how to weigh various alternatives by true/false questions, or at best some equation worked into the scripts. It can't see the big picture, or even create a new one; it is limited to the narrow frame set before it and inevitably falls into patterns the clever human mind dismantles in time.

So, how to make a computer assess the situation at least as well as a human does is one challenge. I look at how humans make important decisions and, following the idea that human behaviour is essentially economic behaviour, I try to reduce the game environment into an economy in which individual computer actors take part. Games such as MTW can have essentially everything reduced to an economic resource. That of course isn't much of a leap; you could assign point values to all sorts of things in scripts. However, my major innovation is to apply some economics to these resources to get these computer agents to maximize their profits in a way that takes account of all the many disparate needs of the agent (to defend the borders, to make money, and so on) both simultaneously and generally. It can balance tradeoffs against one another, evaluate long-term strategies and their value in the present, and recognize optimal resource distributions while estimating a future optimal state and setting aside the necessary investment to reach it.
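To make that concrete, here is a minimal sketch in Python of what I mean by "profit maximization over game resources". Every name and number below is invented for illustration; this is not actual MTW data or my real implementation:

```python
# Illustrative sketch only: resource names, weights, and actions are invented,
# not taken from MTW's actual data.

def expected_profit(action, values):
    """Score an action by the net value of the resources it gains and loses."""
    gain = sum(values[r] * amt for r, amt in action["gains"].items())
    cost = sum(values[r] * amt for r, amt in action["costs"].items())
    return gain - cost

def choose_action(actions, values):
    """Pick the action with the highest expected profit for this agent."""
    return max(actions, key=lambda a: expected_profit(a, values))

# Two personalities, expressed purely as different resource valuations:
dove = {"gold": 1.0, "territory": 0.5, "troops": 1.5, "border_security": 2.0}
hawk = {"gold": 0.8, "territory": 2.0, "troops": 0.7, "border_security": 0.5}

actions = [
    {"name": "invade",
     "gains": {"territory": 1},
     "costs": {"troops": 1, "gold": 0.5}},
    {"name": "fortify",
     "gains": {"border_security": 1},
     "costs": {"gold": 0.8}},
]
```

With those numbers the dove fortifies and the hawk invades, purely because they price the same resources differently; no faction-specific scripting is involved.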

All of that is harder than it looks, and I also introduce variability into the agents in that all of them have different values for the various resources. I don't even try to figure out the optimal strategy for the computer (as a scripter must), but rather set up the variables and rules and let various strategies emerge by creating different agent permutations. Combine this with a genetic algorithm (basically evolution for computerized DNA) and run trials to select the most interesting solutions.
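The genetic algorithm part is fairly standard. A bare-bones version over those valuation "genomes" might look like the following sketch (function names and the fitness source are my own illustration, not the real system):

```python
import random

def crossover(a, b):
    """Mix two parents' resource valuations ('computerized DNA')."""
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(genome, rate=0.1):
    """Randomly perturb some valuations so new strategies can emerge."""
    return {k: v * random.uniform(0.8, 1.2) if random.random() < rate else v
            for k, v in genome.items()}

def next_generation(scored, size):
    """Breed a new population from the best-scoring agents of the last trial.

    `scored` is a list of (genome, fitness) pairs; fitness could be, say,
    the ratings mailed back by the play-testers."""
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    parents = [genome for genome, _ in ranked[:max(2, len(ranked) // 2)]]
    return [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(size)]
```

Each generation keeps the better half of the agents as parents, then breeds and mutates them to fill the next trial's roster.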

My belief is that such a paradigm could be applied to any number of problems, most notably strategy games of course. I won't go into the programming/math babble, but the hope is that this is simultaneously faster and 'smarter' than scripted decision systems. The MTW variant does use some scripts, but instead of making the decisions through a script, the decision is made in my new AI system, which then uses a script to run through a particular task (for instance, it might decide it needs an alliance with the human, and it'll run a script to direct one of its emissaries there). The point is to take the brains and put them in something very fast and flexible, and take the muscle (little to-do lists to get certain things done, like telling a province to build peasants, pathfinding, etc.) and use scripts, which are fine for that purpose.

    The speed of the system comes from the ability to look at many things very abstractly, very quickly, and in parallel.

The intelligence of the system will first come from giving the AI an increased ability to understand what is a bad decision and what is a good decision (without human-made rules that can be discovered and broken). The second stage will come from improving the agents' solutions: keep picking the ones that do best, as those seem to have the best values for the various units and buildings in the game. The third stage, which is not guaranteed to work, is that using those solutions the AI will be able to morph itself onto the human (and other AI) players and thereby figure out what it would do if it was in your position (and consequently, what it would need to do to defend against it). The final stage is creativity, which is where learning AI comes in, and isn't really my intent in this experiment. This AI does not learn from watching the human; it is pre-built, and though it does get better with time, that is because it tries out a different strategy to start with and tries to mix itself with other successful AIs. I do think learning AI could be given a boost by this framework once I prove the basics.

The experiment would be to give people the game, tell them to do their best, and then have them rate their opponents based on difficulty, fun factor, and creativity. In a game like MTW, where you have what, 12-15 opponents per campaign, you can try out a lot of different agents all at once in one trial. The best get emailed back to me and are put together to create the next-generation patch. Without learning or expansion the AI will eventually reach a maximum potential, but I'm hoping it will beat out the sort of experience someone gets playing against a traditional AI.

The side effect is that some very different agents will show up due to the nature of the system. Some will be peaceful, some will be warlike, and just by observing them in play we could identify personalities and preserve them for future games. So if we find a very ruthless expansionist we could name them Genghis Khan and have them play as the Horde in the final version; we could evolve a Pope that actually makes sensible excommunication policies and doesn't depend on respawning with three stacks of templars; and so on. At the very least, the AI will not need a handicap, because the difficulty levels are natural: some strategies suck, so they will make good opponents for the Easy difficulty, while other strategies will end up being hard to beat and therefore go on the higher difficulty levels.
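The difficulty assignment could then be as simple as ranking the surviving agents by how they actually performed in trials. A hypothetical sketch (field names invented):

```python
def difficulty_tiers(agents):
    """Split evolved agents into Easy/Normal/Hard by observed win rate,
    so no stat bonuses ('cheats') are needed to set difficulty."""
    ranked = sorted(agents, key=lambda a: a["win_rate"])
    third = max(1, len(ranked) // 3)
    return {"easy": ranked[:third],
            "normal": ranked[third:2 * third],
            "hard": ranked[2 * third:]}
```

The weak strategies become Easy opponents as-is, and the strong ones become Hard, with no +25% bonuses anywhere.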

    Okay, that is enough nonsense out of me, I can go on for ages about the potential and reasoning behind my AI project.

  6. #6


When I said 'no creativity' above, I meant it cannot redefine itself through self-improvement. It will, however, come up with strategies that may seem creative to the human player. For instance, it might come up with an appeasement strategy and then, when the player is not looking, have built up an alliance of other countries to start a war with; or it might leave a border province undefended to draw in an enemy and then launch a sneak attack on the player somewhere else with its real armies. The goal is to get the computer to recognize these sorts of things all by itself; a future experiment would be to have it actively refine its strategies to create these patterns with the premeditated expectation of these sneaky tactics.

  7. #7
    Member Member Obadiah's Avatar
    Join Date
    Feb 2004
    Location
    NYC, NY
    Posts
    104


    Viv,
It sounds like a great project, both for the game world and for strategy fans. Congrats on the initiative.

It seems to me, IMHO, that this is very similar to the problem of building a good chess program. They work, and can compete at high levels, because of the brute strength of gigantic numbers of calculations, rather than by discriminating thinking, evaluating, or learning. As I understand the state of the technology (again, I'm far from a pro), it lends itself easily to linear programming, but getting it to judge alternatives as you're suggesting is a challenge that much of the industry is wrestling with.

I say go for it. I'd gladly play any demo you create, although my game hours per week are pretty limited...

  8. #8


Oh yes, it certainly is a major challenge area in the field of AI (not just the game industry, but professional and academic circles as well). Projects like Deep Blue, built to play chess, are a good demonstration of what you can get by cleverly throwing lots of power at a problem all at once to compete with the best human players, but they are also a demonstration that some radical invention will be needed, as the same technique breaks down in complex environments, even ones as simple as MTW (compared to the real world).

I do base some of my stuff on relatively recent ideas in decision making from computer vision and robotics, which take huge chunks of numbers and transform them into concepts such as recognizing objects or navigating robots (we are far, far, far from Johnny 5, alas). I think I've come up with some novel applications and refinements to apply similar techniques to economic systems, and also some gadgetry from computer science to better understand economics, but as I said, I need some sort of demonstration that it works (even if only to prove it to myself) before I can develop it much further.


    Expect a website with information on the details of the gameplay later this week.

  9. #9
    Arrogant Ashigaru Moderator Ludens's Avatar
    Join Date
    Nov 2003
    Posts
    9,065
    Blog Entries
    1


I see you have got it all worked out, and I was actually able to understand it. Of course, how you are actually going to program it is beyond me. It all seems pretty logical, but there are two points on which I have some doubts:

A: AI prediction
Quote Originally Posted by VividYoshee:
The third stage which is not guaranteed to work, is that using those solutions the AI will be able to morph itself onto the human, and other AI, players and thereby figure out what it would do if it was in your position (and consequently, what it would need to do to defend from it).
So you want to have the AI predict what others (the human player) are going to do. If I understand you correctly, you intend to do this by having the AI imagine what it would do in the human's position. This is one way of predicting what a human is going to do, but it does not always work, especially not with computers. This is because computers do not gamble, but humans love it. If the AI is trying to predict a human's course of action by rationally evaluating his current position, the computer can be easily surprised by a bold human move.

There is a second way in which humans predict the behavior of other humans. Instead of judging by the present, you can have the AI judge by the past; this is the way humans most commonly judge. If the AI can analyze the pattern of human behavior, it can see common denominators: an aggressive or careful stance, an emphasis on economy, politics or military, a preference for large, high-tech armies or for balanced armies, and so forth. Then the AI will be much better able to predict what a wily human is going to do.

Of course, this kind of thing would not work with PBM campaigns.

    B: AI learning
Quote Originally Posted by VividYoshee:
I meant it can not redefine itself through self-improvement.
    I have heard rumours that the TW AI is able to imitate the human player. For example, the AI shinobi rush in STW was not supposed to originate from the AI, but from the player. I do not know if these rumours are true, but this would be a form of what I term 'passive learning'.

However, 'active learning' is something else. Trying things out, intentionally or unintentionally, is one of the main ways in which humans (and animals) learn. And humans are good at it, because we instinctively see connections (causality) between events. Sometimes we are right; quite often we are wrong (for example, superstitions are based upon supposed causality between two unrelated events). If you want to have an AI learn by actively trying out things, it must be able to 'see' a connection between events. However,

    1) This is going to be difficult, because the AI needs to determine whether some supposed connection is real or not. And I have no idea how humans do this.

2) Humans are clever because they have already done a lot of trial-and-error. To make the AI reach a level at which it can at least compete with humans, you need to give it 'start information', or else it will lag too far behind.

    3) This start information for the current AI is a pre-programmed (scripted) 'understanding' of the program. On this it bases its decisions, and the information on which these decisions are based is correct. If you are going to add other information for it to base its decisions upon, it might interfere with pre-programmed information.

4) If you actually succeed in making a computer learn by trial-and-error, you are getting very close to, if not actually at, real artificial intelligence. This might be expecting too much.


Intelligence is a passion of mine too; I like to discover the way humans think. But I don't find artificial intelligence interesting, because it involves a lot of numbers. I am reasonably good at numbers, but I don't like 'em.

  10. #10


Quote Originally Posted by Ludens:
So you want to have the AI predict what others (the human player) are going to do. If I understand you correctly, you intend to do this by having the AI imagine what it would do in the human's position. This is one way of predicting what a human is going to do, but it does not always work, especially not with computers. This is because computers do not gamble, but humans love it. If the AI is trying to predict a human's course of action by rationally evaluating his current position, the computer can be easily surprised by a bold human move.
It is expected to work roughly the same as when a human does it. Assuming yourself rational, and your opponent rational, given your knowledge of his resources you will expect him to do certain things, and not do other things. We are dealing with partial information, and even humans can not GUARANTEE arriving at the correct solution. My theory is that if you knew the entire opposing position and their personality (which in my system is defined as their valuations of resources, positions, and strategies), then you could correctly predict their action, because any other would be sub-optimal (i.e. the human wouldn't try it because they don't think the gain is worth the risk). In practice humans can defy rationality; the hope is that they are ultimately punished for it as those gains don't appear (because they took too much risk). This assumes the computer's own personality is a good match to the opponent's; in most cases it will not be, but it should generally be good enough to come up with a range the opponent will act within.

Humans of course are creative and learning (which this system, in this form, mostly is not), so I still think they'll have the ultimate edge, but they'll have to really flex those muscles to beat this system, because one strategy will not work against all AI opponents the same.

This system does gamble; risk valuation is part of the economic model. It doesn't take stupid risks: just as you wouldn't bet a dollar to win a dime at only a 10% chance to win, it won't either. In a pure fifty-fifty shot, depending on the personality of the AI, it would probably take it, just like most people with an average sense of risk would (especially if the gain is more than double your money).
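In sketch form, the gamble decision is just a risk-weighted expected-value test. The `risk_appetite` knob is my invented name for the personality parameter:

```python
def take_gamble(stake, payout, win_prob, risk_appetite=1.0):
    """Accept a gamble only if its risk-adjusted expectation is positive.

    risk_appetite > 1.0 models a bold personality, < 1.0 a cautious one.
    """
    expected_gain = win_prob * payout
    expected_loss = (1 - win_prob) * stake
    return risk_appetite * expected_gain > expected_loss

# Betting a dollar to win a dime at a 10% chance fails the test;
# a fifty-fifty shot at doubling your money passes it for an
# average-risk personality.
```

The same numbers produce different choices for different personalities, which is exactly the variability the agent permutations are meant to exercise.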

So it is best to say that the AI is evaluating what it would and could do with what the human has, given what it knows. It doesn't use this method often (only when it is concerned about a particular agent's recent moves), but the hope is that it is enough for it to come up with a defense before the strike. This is especially useful in determining whether the human player is going to backstab the AI, as is common in many games; us tricky humans like to lead the AI along until it is 'smashing time', and that is one thing I want to shut down tight (as a rational nation in medieval times wouldn't let even an ally build up a massive army on its border, which could only be meant for an attack).
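The self-projection step itself is just the same profit evaluation run over the opponent's known position. A toy version (all names and numbers here are invented for illustration):

```python
def score(action, values):
    """Net value of an action under a given set of resource valuations."""
    gain = sum(values[r] * amt for r, amt in action["gains"].items())
    cost = sum(values[r] * amt for r, amt in action["costs"].items())
    return gain - cost

def predict_opponent(opponent_options, own_values):
    """Self-projection: assume the opponent is rational and values things
    roughly as we do, then predict the move we would make in their place."""
    return max(opponent_options, key=lambda a: score(a, own_values))

ai_values = {"territory": 1.5, "troops": 1.0, "gold": 1.2}
human_options = [
    {"name": "attack_ai_border",
     "gains": {"territory": 2}, "costs": {"troops": 1}},
    {"name": "develop_economy",
     "gains": {"gold": 1}, "costs": {}},
]
# If the predicted best move is an attack on us, start preparing the
# defense before the strike actually comes.
```

When the projected best move is hostile, the AI can react to the buildup instead of waiting for the declaration of war.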

    ----

The rest deals with learning, which of course is an important area, but not my focus. There are many learning systems: genetic algorithms, knowledge bases, hierarchical discriminant regression, and so on. I apply some of those techniques to different things, but am not running any sort of learning algorithm here. My thinking is that you need to come up with some way of organizing the things you are trying to learn before any learning algorithm will become practical enough for real-world use. I think my system provides the way to organize both the information and the learning process, so the hope is that learning will be a natural outgrowth of this experiment; however, the experiment itself will not go to that point.

Certain information will be relevant, and there is a sort of memory of past actions, but it is used only in evaluating strategies. For instance, if the AI does its self-projection on the human opponent, and sees that he has been constructing a lot of units and sending them toward the AI's border, it will recognize this strategy from its own repertoire and realize it signals an attack. So this gets some of the results of learning, without actually doing the work of learning.
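That recognition step can be as blunt as matching recently observed moves against the AI's own strategy "signatures". An invented sketch of the idea:

```python
# Invented strategy "signatures": each maps a strategy the AI itself knows
# how to play to the moves that characterize it.
SIGNATURES = {
    "invasion_buildup": {"train_troops", "move_army_to_border",
                         "hire_mercenaries"},
    "economic_boom": {"build_farm", "build_trade_post", "lower_taxes"},
}

def recognize(observed_moves, signatures=SIGNATURES):
    """Return the known strategy that best matches the observed moves,
    or None if nothing overlaps at all."""
    best, best_overlap = None, 0
    for name, signature in signatures.items():
        overlap = len(signature & set(observed_moves))
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best
```

Because the signatures come from the AI's own playbook, no observation history has to be "learned"; the AI only checks whether the opponent looks like itself.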

There are trial-and-error systems out there in development, but none are anywhere near practical for learning MTW at this stage, even though MTW is a very simple environment compared to those being studied (which tells me there needs to be something more than what there is).

  11. #11
    Arrogant Ashigaru Moderator Ludens's Avatar
    Join Date
    Nov 2003
    Posts
    9,065
    Blog Entries
    1


Quote Originally Posted by VividYoshee (Mar. 17 2004, 16:29):
It is expected to work roughly the same as when a human does it. Assuming yourself rational, and your opponent rational, given your knowledge of his resources you will expect him to do certain things, and not do other things.
Humans usually do not predict the actions of another person by placing themselves in that person's position. This is simply too difficult, since you seldom know everything the other person knows, and usually he has different priorities from you. To predict what another person is going to do, humans look to the past. Some time ago someone proposed that MTW diplomacy would make more sense if the AI actually did that, i.e. if it had some sort of diplomatic memory.

    However, this applies to normal personal contacts. When it comes to 'competitions', you may be right.

Quote Originally Posted by VividYoshee:
In practice humans can defy rationality, the hope is that they are ultimately punished by it as those gains don't appear (because they took too much risk).
In real life, this is true. However, when playing a game you can get around this by saving and reloading if you don't like the result.

    As far as AI learning goes, you have lost me.
    I think that you are saying that the AI should recognize its own tactics in the human player's moves.

  12. #12


You don't know everything the other person does; however, that is a lack of past information collection as well. A historical model will do no better at predicting the person, simply because the AI at the start of the game doesn't have that information (it might not even see the player's borders till mid-game if the player isn't expansionary). I would argue that humans do predict another person's actions by placing themselves, or models they have built, upon that person and observing the situation (past and present) to deduce both the options they will have available and the ones they would prefer.

Learning by simple observation can be very slow, especially when you have environments and actors that don't lend themselves to simple classification. Yes, trial and error on past events can build patterns, but it is simply not fast enough to be a viable solution. Furthermore, those patterns are very dependent on the training data, and when dealing with wily humans, who love breaking past patterns to take risks, the historical pattern will break down and need to be further refined.

Self-projection, or model projection (building a profile of another player's priorities, which would involve learning), has the potential to be much faster, requires no training time on opponents, and is, I think, a better predictor of opponent actions. You look at the past to try to figure out your opponent's priorities and their present status, which is good. However, you do not offer a suggestion on how to USE that information. My current model doesn't use that information, but I believe it can be improved by the addition of it. However, I think self-projection, rather than model projection, will be of sufficient quality to improve the AI by itself, and that model creation (using learning) within the same system will be even more powerful when the technology progresses that far.
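For what it's worth, the "model" end of that could start as simply as tallying what kinds of moves an opponent favours. A hypothetical sketch (the move categories are invented):

```python
from collections import Counter

def profile_player(move_history):
    """Estimate an opponent's stance from the kinds of moves they have made.

    `move_history` is a list of move-kind strings, e.g. "attack" or "build";
    Counter returns 0 for any kind that never occurred.
    """
    counts = Counter(move_history)
    total = len(move_history)
    return {"aggressive": counts["attack"] / total,
            "economic": counts["build"] / total,
            "diplomatic": counts["treaty"] / total}
```

A profile like this could then bias the valuations used in the projection, replacing "assume they value what I value" with "assume they value what their history suggests."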

Besides, I do use diplomatic memory (it is crucial to my system), but it is not bulk-collected and then used to determine a pattern. The patterns are already in the AI (a single pattern corresponding to its own personality), and it applies that to the data. It assumes that pattern is correct, and if it isn't, the AI gets defeated and is replaced with a different pattern (the evolution part of the experiment).

I really think it should be stressed that we are probably looking at two different parts of the issue here. These are two parts that could be put into a system together, but one (my projections used to run through the system) is concerned with actually combining data (regardless of its source) to come to a conclusion, and the other is the collection of that information (historical data). Even if you have lots of historical data, how do you sift out the useful stuff? That is the question AI has been dealing with for years, and nothing has yielded a sufficient answer yet.

    ---

Yes, this is rather focused toward competitive and cooperative environments. I don't want this to turn into a discussion of psychology (though I probably could make it one). It's closer to game theory/economics than anything else, with some stuff stolen from the people in cognitive science.

    ---

Players are free to save-reload all they want. The idea is to make an interesting AI, not to make humans interested in playing the game fairly. Such a player would no doubt be more interested in MTW itself, with the better graphics, actual battles, and so on. What would they want with a clever AI if they reset whenever it does something clever or punishes them for doing something foolish?

    ----

My ranting here hasn't exactly been very coherent, so I don't doubt it is hard to understand. I'm going into much more detail than I intended to on this forum; hopefully the more carefully constructed explanations on the website will be more useful.

  13. #13
    Arrogant Ashigaru Moderator Ludens's Avatar
    Join Date
    Nov 2003
    Posts
    9,065
    Blog Entries
    1


Quote Originally Posted by VividYoshee (Mar. 17 2004, 20:15):
My ranting here hasn't exactly been very coherent so I don't doubt it is hard to understand.
    It was all understandable to me, except for the part about computer learning, so it isn't that bad.

Quote Originally Posted by VividYoshee:
Players are free to save-reload all they want.
I just wanted to point out that human players are more willing to take risks, because they can back up their situation.

As for the AI predicting human moves: you asked for ideas, so I gave you one. If it isn't workable, then that is fine with me. I am certainly no expert in these matters.

    Lastly, where can I find your website?
