AI Archives - GameAnalytics https://gameanalytics.com/resources/tags/ai/

AI-Driven Creativity: Prototyping Games in the Digital Age https://gameanalytics.com/blog/ai-driven-game-development/ Tue, 26 Dec 2023 12:01:43 +0000 https://gameanalytics.com/?p=21115

AI and game development

AI is definitely going to change how we work, play and live. But right now, it’s not exactly great at being original. It tends to churn out a lot of generic advice and content. But, while you can’t (yet) plug in a prompt and have it develop a fully functional mobile game, you can use AI to help you come up with ideas and speed up your process. Here are a few ways you can use AI when developing your prototypes.

How should you use AI?

Regardless of whether you’re playing around with Midjourney, ChatGPT or any other AI, there are a few rules to getting the most out of it.

Make your prompts specific

The more detailed and specific you are in your initial prompt, the more useful the response will be. If you’re generic, you’re going to get generic responses. Asking it to come up with a “unique mobile game concept” isn’t going to get you far. Instead, make sure you give it as much information as possible. Write in your prompt like you’re describing a brief. The more you put in, the more helpful the response will be.

Don’t stop at the first response

Using AI is all about refining your prompt and becoming more and more specific until you get to a response that works. When testing the AI, we often needed to add caveats or get it to perform tasks one by one if we wanted to get the best result.

Use multiple AIs

We’ve found that it’s best to give multiple AIs the same prompt. Ask Bard, Bing and ChatGPT the same question, and you’ll get much more varied responses. So mix and match between AI if you want some variety. It can also smooth out some issues we’ve found with certain prompts. What works with one AI might get completely different results with another.

What can you use AI for?

It’s best to use AI when you’re looking for a very specific output that would take a team ages to do themselves. If you try to use it to come up with original ideas or themes, you’ll find that your prompts are too open-ended. Those are best left to real humans. For example, if you just ask it to come up with themes for your mobile game, it’ll likely rehash ideas that are already popular – basically telling you to create games already in the top charts. Not particularly useful. But specific tasks – that’s where AI shines.

1. Brainstorm your concepts

This is an area where AI can excel, churning out thousands of ideas in mere seconds. With the right prompting, you can get it to create a huge list of concepts to add to your own ideas. Most of those will be duds (much like in any brainstorm), but that’s fine. You’re just using them for inspiration.

As for the prompt, it’s best to ask it for ideas using a specific mechanic or with a specific theme. For example, “show me a long list of themes for a hyper-casual game that uses swipe mechanics.”

2. Make snippets of lore for your items

If you have thousands of items, it can take up a lot of time writing a paragraph of text for each one. With AI, you can generate these snippets of lore almost instantly.

Bard example for game dev 1

We asked Bard to create lore snippets for various magical items in a game.

The responses you get won’t be perfect, but they give you a starting point. Edit them and make changes to fit your specific needs, and you’ve saved yourself a ton of time. Similarly, you could use AI to write the backstory for locations, bosses, levels, or even power-ups.

3. Write short descriptions for multiple items

While lore can add flavour to your game, you’ll often find you need to have a few short sentences as hover-over text for every item in your game. Give the AI a list of items you need to describe, and you can speed up that process.
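One way to hand the AI that list is to fold every item into a single batched prompt. The sketch below is our own illustration of the idea (the helper name, item names, and wording are hypothetical, not from the article):

```python
# Hypothetical helper: turn a list of game items into one batched prompt,
# so the AI writes a hover-over description for everything in a single call.
items = ["Flaming Sword", "Frost Shield", "Boots of Haste"]

def batch_description_prompt(items, max_words=20):
    # One bullet per item keeps the AI's answers easy to map back.
    lines = "\n".join(f"- {name}" for name in items)
    return (
        f"Write a hover-over description of at most {max_words} words "
        f"for each of the following game items:\n{lines}"
    )

prompt = batch_description_prompt(items)
print(prompt)
```

Pasting the resulting prompt into any of the chat AIs should give you one short description per item, which you can then edit for tone.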

Bard example for game dev 2

AI can easily put together short descriptions for multiple items at once.

4. Create characters in just a few prompts

By building on multiple prompts, you can develop a whole host of characters to populate your game. For example, you could start by getting the AI to brainstorm a list of twenty Japanese names suitable for an archer. Once you have your name – we’ve chosen Yumi (meaning “bow”) – you can ask for a more detailed description.

Bard example for game dev 3

With a name in hand, we now have an entire character bible for Yumi the archer.

By telling Bard to use specific headings, we can generate multiple snippets of information that will be vital in making Yumi a rounded character.

5. Write dialogue to sprinkle into your game

There are numerous situations where you might need a short piece of dialogue from your characters, whether that’s when they level up or when they first enter the dungeon. By telling an AI about the character and listing the various situations, you can get it to produce all these snippets at once.

Bard example for game dev 4

Giving AI a list of headings is a useful way to get multiple results at once.

If you need more, tell the AI exactly how many snippets of dialogue you want. Or include multiple characters and see how they interact with each other.

6. Help refine your mechanics

If you know what type of game you’re creating, you can get the AI to help with specific tasks. Maybe you need a puzzle for a dungeon or a list of items a shop might sell. For example, imagine you’re making a crafting game. You can feed in your resources and have the AI come up with a list of recipes.

Bard example for game dev 5

Kickstart the design process by getting AI to come up with some baseline crafting recipes.

7. Refine the writing you already have

It isn’t just generating text that you can use AI to help you with. As we’ve mentioned, AI can be rather generic if you’re too open-ended. So if you want truly original thinking – develop the lore yourself and then get the AI to refine it.

Bard example for game dev 6

We tell Google Bard to rewrite our description of Yuttgard.

From our – quite bad – description of Yuttgard, Bard has produced something much more enticing. It’s not perfect, but it’s got way more flair than our original and would be ideal if we’re only trying to put together a prototype.

8. Write marketing materials

Scripts for videos. Headlines for banner ads. App store descriptions. These all need words that you might not have time to create yourself. Just remember to give as much information in your prompt as you can.

Bard example for game dev 7

Even if we don’t use the exact wording, the AI can give us a good starting point for our App Store description.

Use analytics to track your success

Once you’ve made your prototype, you’ll probably want to run some A/B tests to see what’s working with your players, and whether your idea is as rad as it sounds. In which case, try out our A/B testing tool and get all the data you’ll need.

Roamer Games: Powering Game Development by Combining GameAnalytics and AI https://gameanalytics.com/case-studies/roamer-games/ Mon, 06 Nov 2023 08:56:23 +0000 https://gameanalytics.com/?p=21624


Roamer Games is the new kid on the game development block. They are all about creating a mid-core strategy game that would give you the thrill of Civilization in just a five-minute gaming session. Early in the production, their team figured that to create a hit title, they needed to understand players’ behavior on a granular level. That’s where GameAnalytics’ data tools came into the picture. This case study breaks down how Roamer Games used GameAnalytics combined with AI technology to level up their game development.

Understanding Roamer Games’ Needs and Challenges

The game Roamer Games had in mind was a cross-platform but mobile-first marvel. Think iOS, Android, and WebGL. It’s a blend of Civilization and Clash Royale – a strategy with a dash of action.

The studio needed the lowdown on how gamers played their game right from the early stages of development. They wanted to know everything: retention, how long gamers played, how often, where they dropped out, and even what they thought of the first-time user experience. Later, they became set on understanding how players who used the Real-time PVP feature compared to those who skipped it.

Roamer Games uses the Unity engine for developing their game. And before integrating GameAnalytics, they also used Unity’s analytics. While it was initially sufficient, they quickly realized they needed more. After reviewing and comparing different analytics tools, they decided to run with GameAnalytics. David Smit, CPO of the studio, comments:

WebGL was a major driver for our choice of analytics provider. That and the ease of setup early in the product development process made it a clear choice to go for GameAnalytics. Their data warehouse offered us a platform to go deep and granular with the data, while the dashboard gave us quick insight into the most important daily metrics.

Lastly, he appreciated the quick integration:

Setup was surprisingly easy. Install the SDK, create a game on the Dashboard, and link the two. Now you are good to go.

Combining GameAnalytics and Artificial Intelligence

The dashboard was the go-to for Roamer Games. It gave them the initial insights they needed every day. They even saved common queries to track player progress, where players dropped off, and Cohort data to understand how players hung around.

The data export options from GameAnalytics allowed Roamer Games to go even deeper with their data, unlocking player and event-level insights.

The data tools that GameAnalytics offer are incredibly powerful. We now have access to an enormous amount of data through BigQuery. It requires some good knowledge of SQL, but it’s the way to go if you want to dig deep into the product and what happens to these users. Once you understand the different databases and workflow, you can get incredibly granular and hone in on any specific point in the game.

Since the studio does not have a dedicated in-house data scientist, the team didn’t shy away from using AI – specifically ChatGPT – to form and optimize complex queries. They simply specified the data sets and tables available and asked ChatGPT to write the queries. The team was aware of occasional hallucinations and inconsistencies, and when they found some, they asked for a revision, and the chat would fix things for them.

Here is an example of a prompt Roamer Games used to build a query:

This is Table X about the player state: *paste the structure of that data set and table*.
This is Table Y about the daily checkpoints: *paste the structure of that data set and table*.
Give me a query that returns the following information, split by build, of the last six months.

This is a query generated by ChatGPT:

ChatGPT query with an error
A query with an error on line 35 – generated by ChatGPT.

However, BigQuery returned an error as it tried to grab a field ‘build’ that does not exist in the design_event table (highlighted in red). On top of that, the query attempted to split by build while already filtering by a specific build, leading to a meaningless split.

The team simply prompted ChatGPT with the error: “error: Unrecognized name: build at [5:7]”, and AI provided an improved query. Now, Roamer Games received a meaningful query:

ChatGPT generated query with improvement
Improved query generated by ChatGPT.
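The screenshots themselves aren’t reproduced here, but the shape of a “split by build” query is easy to sketch. Below is a minimal stand-in using SQLite rather than BigQuery; the table name echoes the design_event table mentioned above, while the columns and data are purely illustrative, not Roamer Games’ actual schema:

```python
import sqlite3

# Illustrative stand-in for the BigQuery design_event table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE design_event (user_id TEXT, build TEXT, event_id TEXT);
INSERT INTO design_event VALUES
  ('u1', '1.0', 'pvp:start'),
  ('u2', '1.0', 'tutorial:done'),
  ('u3', '1.1', 'pvp:start');
""")

# Split a metric (here: distinct players) by build, the same structure
# the team asked ChatGPT to produce.
rows = conn.execute("""
    SELECT build, COUNT(DISTINCT user_id) AS players
    FROM design_event
    GROUP BY build
    ORDER BY build
""").fetchall()
print(rows)  # [('1.0', 2), ('1.1', 1)]
```

The key point of the fix above is that `build` must be selected and grouped on, not filtered to a single value, for the split to mean anything.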

Optimizing the Game with Key Findings

Here is what Roamer Games learned about its game after integrating with GameAnalytics. The studio discovered that players who dived into PVP had a 50% better Day 1 retention. This allowed Roamer Games to understand the value of the feature in their product very early.

Plus, they figured that most players picked to play as Vikings. This led the studio to prioritize not only the starter pack but early unit creation in general. The team made sure that they focused on creating a whole set of crowd-pleasing units from the Viking era first.

Last but not least, they also noticed gamers were spending more time playing the game:

Roamer Games retention chart
Source: Roamer Games

We measured progression between levels a lot. The number of core games someone played was the main indicator of their progression. We found that after level 3 or 5, players dropped out in large numbers. Thanks to this insight, we could easily pinpoint the issue and quickly improve the balancing of these levels.

Discover, improve, optimize

Roamer Games leveraged GameAnalytics to delve into player behavior and optimize game features. Despite not having a dedicated data scientist, the studio was able to translate data from Raw Export into meaningful queries using artificial intelligence and gain valuable insights via the BigQuery warehouse. As a result, the studio improved the game’s starter pack and other units by prioritizing Viking characters, enhanced level progression, and created data-driven, player-centric gaming experiences.

GameAnalytics can help you squash bugs and make data-driven decisions about your own titles rather than guessing in the dark. Check out our SDKs and start using our free tool today.

Generative 3D creation with AI prompts https://gameanalytics.com/blog/generative-3d-creation-with-ai-prompts/ Wed, 06 Sep 2023 12:30:42 +0000 https://gameanalytics.com/?p=21461 Generative AI cover

AI is evolving fast. From image generation to artificial assistants, we're seeing more uses for AI hit the market. 3D modeling included. Our friends at Sloyd are launching AI text prompting in their 3D web editor. They've shared all of the details for their latest release.

So, Sloyd is a 3D automation tool for instantly generating game-ready assets, and we have just gone live with AI prompting. We’ll tell you all about it, but first, let’s take a step back and look at the state of AI in gaming and specifically in modeling.

How AI is used in game development and in 3D modeling

I bet you’re already using AI to some degree in your game development workflow. According to a survey conducted by A16Z Games, 87% of studios are using an AI tool or model as of today. Yet, the use right now is mostly on the periphery, as the vast majority of studios are using horizontal, non-game-specific tools (ChatGPT/GPT-3/4; Midjourney; Copilot; etc).

To this point, AI has been adopted to varying degrees for concept art ideation, coding for games, creating textures, NPC scripts, sound effects, and motion capture. The area with the least adoption is 3D modeling. It’s no wonder; when it comes to 3D models for games, we have high standards. A game asset not only needs to look good but also needs to perform.

The key to a game-ready asset is good topology. The term “topology” refers to the distribution and structure of vertices, edges, and faces of a 3D model. The topology shows how well vertices are organized (cgiscience). A model with bad topology will be created with many more polygons than necessary to serve its artistic purpose. It would simply be needlessly heavy, limiting the amount of art you can render in a single frame.

More often than not, when taking assets from another source (a marketplace, an older studio game, or a generated model), we need to make modifications and adjustments. Working with an asset with bad topology will also be costly. It will be much harder to work with the UVs and make changes to the textures. It would also be difficult to modify parts of the mesh and even harder to animate without stretching or shrinking parts of the asset.

Looking at the outputs of the best-in-class GenAI research, Get3D, and Dreamfusion, the topology is very far from optimal. They don’t look great, they gobble polygons, and are extremely painful to work with.

Book 3D model image
Book 3D model taken from Get3D inspected in Blender. (Left – UV, right – vertices)

A third factor in using generative 3D models is the creation time. A result can take an hour. One might argue that an hour is a lot less than creating a model from scratch. But often you need several iterations before you get a satisfactory result, and if each iteration takes an hour, the overall tradeoff of time versus quality is a lot less compelling.

Crab 3D Model

Crab 3D model taken from Dreamfusion inspected in Blender. (Left – UV, right – vertices)

A parametric approach is the key to instant, game-ready assets

Sloyd adopts a ‘Lego pieces’ approach to 3D automation. It employs parametric 3D geometries as its base building blocks and assembles them into 3D assets. On the one hand, this approach might appear limiting, as the system can only create from the building blocks it has been ‘fed’. However, it carries a significant advantage: the results are guaranteed to possess clean topology, easily manageable UVs, separable parts, and optimization for performance. The Sloyd team is actively expanding its library of building blocks, aiming to enable the creation of a wider range of assets. While Sloyd doesn’t cover everything yet, it ensures that every model you create is game-ready.
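The “parametric building block” idea is easy to illustrate. The sketch below is our own toy example, not Sloyd’s code: it generates a cylinder mesh from a few parameters. Because the construction is procedural, the vertex count and topology stay clean and predictable at any size or resolution:

```python
import math

def cylinder_mesh(radius: float, height: float, segments: int):
    """Parametric cylinder: two rings of vertices plus one quad per segment.

    The same code yields any size/resolution, and the topology
    (vertex/edge/face structure) is always regular and predictable.
    """
    verts = []
    for ring_z in (0.0, height):          # bottom ring, then top ring
        for i in range(segments):
            a = 2 * math.pi * i / segments
            verts.append((radius * math.cos(a), radius * math.sin(a), ring_z))
    # Side faces: one quad per segment, wrapping around at the seam.
    faces = [(i, (i + 1) % segments,
              segments + (i + 1) % segments, segments + i)
             for i in range(segments)]
    return verts, faces

verts, faces = cylinder_mesh(radius=1.0, height=2.0, segments=8)
print(len(verts), len(faces))  # 16 8
```

Change `segments` and the polygon budget scales exactly as expected, which is the property that makes parametric assets cheap to optimize.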

Lamp 3D model

Lamp 3D model taken from Sloyd inspected in Blender. (Left – UV, right – vertices)

Adding to its capabilities, it’s incredibly fast! Input your text, and you’ll receive a result in mere seconds. Following that, you can make adjustments to items using simple inputs like sliders, and the changes occur in real-time. Currently, each text prompt yields a new object. So, if you’re dissatisfied with the outcome, refining your text will yield a new model. However, in the near future, you’ll be able to iterate on a model using text descriptions, and all of this will be in real-time.

AI prompting of models, step-by-step

Signing up for Sloyd is free, and you can start creating in a few seconds. There are a few ways to start creating with Sloyd, and the newest way is the AI prompt. Just describe what you want to create in simple words. Include the object’s name and one or two adjectives. To achieve optimal results, focus on props, weapons, furniture and buildings. Try something like: “Well with an oriental roof”, “spaceship with X-wings” or “big flying saucer”.

AI Prompt

Sloyd, step 1: AI prompt

A model will spawn on the canvas, and a new side menu with sliders, toggles, and buttons will open.

  • Buttons: At the top, you’ll find selectable parts that change areas of the model.
  • Sliders: Adjust standard parameters such as height, width, and curvature using sliders. These parameters allow you to manipulate the shape of your model.
  • Toggle Parts: Toggle different parts of the model on or off to create unique variations. Experiment with different combinations to achieve the desired look.
  • Advanced Options: Delve into the details to access more advanced options for fine-tuning your model’s appearance. You might encounter sliders that drastically alter specific features or parts of the model.

Iterations and sliders

Sloyd, step 2: iteration with sliders and buttons

One of the standout features of Sloyd is the Randomizer. This tool provides an excellent starting point for your design. Click the Randomizer button multiple times to generate various design iterations.

  • Explore unexpected and creative design possibilities that you may not have considered.
  • Find a starting point that resonates with you. It could be a unique design element or an intriguing shape.
Randomizer
Sloyd, using randomizer to expose unexpected variations

Once you’ve crafted your ideal 3D model, it’s time to add the finishing touches:

  • Materials and Colors: Navigate to the Materials section to experiment with different colors and textures. Transform your model’s appearance by selecting vibrant, bold colors or more subdued tones.
  • Preview Your Changes: As you make adjustments, the outline of your model will update in real-time. This feature allows you to visualize the changes you’re making and make informed decisions.
  • Export Your Creation: When you’re satisfied with your masterpiece, select the object and click the Export button. Choose the desired format for your export, such as GLB or OBJ.
Exporting example
Sloyd, step 3: exporting

What’s coming up in Sloyd in terms of AI and 3D modeling

While our AI prompting is still in the experimental stage, it’s continually improving. Yet, this is just the initial phase. The next stage involves introducing the capability to iterate on a model using text. After that, our plan encompasses enabling the AI to comprehend color and material changes, as well as generating and integrating AI textures. Our big leap will be AI spatial assembly of 3D models. Once accomplished, the system will autonomously select ‘Lego pieces’ from a virtual ‘warehouse’ and meld them to craft a 3D model without external guidance. This, coupled with an extensive and perpetually expanding ‘warehouse’, will make it possible to create pretty much anything. As we extend this capability from crafting single models to multiple models simultaneously, we’ll have AI world creation from the ground up.

Sloyd’s AI roadmap
A visual description of Sloyd’s AI roadmap
Using AI to Supercharge Your Game Art Design https://gameanalytics.com/blog/using-ai-to-supercharge-your-game-art-design/ Wed, 02 Aug 2023 15:49:45 +0000 https://gameanalytics.com/?p=21306 Midjourney Cover Image

Discover how tweaking AI tool settings can help you generate varied art styles, produce better concepts, and speed up the process from prototype to final design. With AI on your team, creating unique game art has never been easier or faster.

We’ve previously delved into how AI tools such as ChatGPT, Bing, and Bard can speed up your development process. These tools are highly efficient for creating large amounts of text at once – from lore snippets to item descriptions.

Another powerful application of AI is in the realm of art. We’ve explored how you could use PicFinder to help generate AI art for your game. But AI generators aren’t just useful for character concepts. They can create a wide range of assets that you’ll need. As we mentioned in our previous posts: the results won’t be perfect, but AI can certainly help you develop a workable prototype faster. You can then pass these AI-generated creations to your actual art team for further refinement, ensuring stylistic consistency.

Understanding AI commands

Each AI tool operates differently and has distinct parameters you can tweak – especially in the realm of art.

MidJourney has a whole array of parameters

There are two ways to change your settings in MidJourney. The first is to change your default settings using “/settings”. But these only touch the surface of what you can do.

You can explore the full list of commands on their website, but there are a few that are particularly useful to know. Put these commands at the end of your prompt. They all begin with a double dash, followed by the parameter of your choice.

  • --chaos <0-100> The more chaos you decide to have, the more random elements MidJourney will add. This is useful if you want each of the four images to be very different from one another.
  • --no <terms> MidJourney will try to remove your terms from its results. For example, if it keeps adding lakes to your landscapes and you want it to avoid that.
  • --v <number> or --niji MidJourney regularly updates the version. But if you’ve used a prompt using an earlier engine and liked the style, you can always specify which version to use. You can also use niji, which is their model specifically for an anime style.
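Putting these together, a full prompt might look like this (an illustrative example of ours, not one from the article):

```
/imagine a ruined castle on a cliff, watercolor style --no lakes --chaos 40 --v 5
```

Note that the parameters always go at the very end of the prompt, each prefixed with a double dash.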

PicFinder prioritizes the front of your prompt

If you’re using PicFinder, you can change the settings next to the prompt bar. It’s worth experimenting with the aspect ratio, as this can lead to quite different results. For example, square images are more likely to look like portraits or avatars. The most important part of PicFinder is that it prioritizes the first words you type in. So make sure you put the key information at the front.

PicFinder Example

They also have four models you can choose from:

  • MindCanvas. This is their default setting, and it’s the most varied option you can use. It’s good for anime, cartoon and fantasy styles, but covers most bases that you’ll need.
  • ReV Animated. Use this one when you’re making a character and want a good front-facing portrait. If you’re looking for character inspiration, this is the model to use.
  • AbsoluteReality. This is the model to use when you want something more photorealistic. It’s particularly good at environments.
  • Samaritan 3d Cartoon. This is for really cartoony characters and objects.

Whatever tool you’re using, try out different settings, models and parameters to see how it affects your image. Once you find what you like, for example, a specific art style to use, make sure you copy and paste that style into each prompt at the end (or make a note of the settings you used). That’ll help make sure you get more consistent results.

Inspiration for art styles

It can be a bit daunting when you first start using an AI generator. What exactly should you add? Thankfully, Andrei Kovalev has created MidLibrary, which can let you see pretty much every style possible. Even if you’re not using MidJourney, this can be a useful site to help you refine your prompt. Similarly, you can use a site like Stable Diffusion Art to find styles.

Give your artists a better brief

When coming up with characters, it can be difficult to describe what you mean. This is where an AI generator can help you give a better brief to your artists. You see, a generator can’t create a consistent character – every time you use it, it’s going to start from scratch. But it can help you show your actual artists what works for you. So type in your prompt, pick out the results that appeal to you, and use them to form your actual brief.

Character Design PicFinder

When using PicFinder, we typed in these prompts to get a design for our character:

  • female character who is a vampire hunter
  • female character who is a vampire hunter pixel artstyle
  • female character who is a vampire hunter cute artstyle
  • female character who is a vampire hunter studio ghibli style.

You can even use it to rule out certain routes. For example, we really don’t like the top-right image that it generated. So we’d let our art team know what to avoid.

Create maps to help your world building

Struggling to start creating your world? Maybe you’ve got a rough idea of the layout, but don’t know how it’ll all look when it comes together. Well, AI can generate you a map.

Maps example MidJourney

A map of a fantasy world, include three continents, parchment --v 5 (MidJourney)

Admittedly, it seems to have struggled with the number of continents. But it’s a start. We could easily use this map as a starting point for creating lore or places of interest for our characters.

And if you need it in your game’s style – you could start with a more specific prompt. Just by changing the style, we can get vastly different results.

3D Map Example MidJourney

A map of a fantasy world, include three continents, 3d isometric, --v 5 (MidJourney)

Generate textures for your tile maps

It can be painstaking to get textures for different surfaces – especially if you want a specific style. But AI can generate these for you in a matter of moments.

Mobile wall background PicFinder

Cartoon stone texture wallpaper (PicFinder)

If you’re making a 2D game, you could even get it to generate an entire tileset, letting you build new rooms with relative ease.

MidJourney Tileset

A tileset for the floor in a mobile game --v 5 (MidJourney)

Create assets and icons

If you’ve got a thousand different items in your game, it can take months to draw each and every icon by hand. You could licence a library from somewhere, but then every game starts to look the same.

Instead, you can describe the items to an AI and get them in a style that fits with your game. Even if you’re only using these as placeholders in your prototype, while your artists work on the real deal – it’s going to be far better than a generic icon.

Mobile Icon example PicFinder

Flaming sword, item icon for a mobile game (PicFinder)

Remember, with some tools you can use a specific image as an initial prompt. (For example, we took a photo of a random hat we had lying around and asked MidJourney to turn it into an icon.) Maybe you do this when you want a specific look or maybe if you want to create variations of the App Store icon.

Hat icon MidJourney

A random hat we had lying around as an item icon for a mobile game (MidJourney)

Remember: If you don’t like the options that it creates, you can always try again or ask it for more chaos.

Add flavour to lore entries

In our previous AI article, we got Google Bard to refine a description of a floating city – Yuttgard – for us. We can just imagine that the player could unlock the lore and check it out in some sort of in-game codex.

But having just text might be a tad dull. Maybe we want to spice up the entry and show the players what Yuttgard looks like. Well, AI could help there, too. Type in your lore and see what the AI comes up with. It might be abstract, or it might show the location you’re describing.

MidJourney landscape example

We put the full description from Bard into MidJourney, along with “Arcology style --v 5” (MidJourney)

It might not be just in a lore entry that you use this. Maybe you need a background image for your marketing or a landscape shot for a blog post you’re writing. You can create these kinds of images, too. (In fact, we used MidJourney to create the cover image for this article.)

Volcano example midjourney

An island landscape with a volcano in the centre, in a mobile game screenshot style --v 5 (MidJourney)

As you can see, there are plenty of ways you can use AI to speed up your development process – and get to a prototype faster. Textures. In-game images. Icons. Character concepts. All of these could take months to develop by hand, but with AI you could do them all in just a couple of days.

Test out your ideas

Using AI, you can create lots of variations of your art and then A/B test to see which ones appeal to your players more. If you’d like some help tracking the results, you can use our live ops features for games. This includes A/B testing and Remote Configs so that you can switch out assets without having to release new versions of your games to the App Stores.

Creating concept art for games, with genAI https://gameanalytics.com/blog/creating-concept-art-for-games-with-ai/ Fri, 02 Jun 2023 19:52:41 +0000 https://gameanalytics.com/?p=21135

While it may not excel at everything yet, AI's prowess in concepting, storyboarding, and ideation has captured the industry's attention. Join us as we delve into the realm of GenAI, exploring its ability to create stunning concept art and assets for games.

First of all, let’s face it, AI is not great at everything yet. But game designers, artists, and producers have adopted it rapidly because there are things it does really well. Concepting, storyboarding, and ideation are prime examples here. We’ve been doing our own exploring, and wanted to share with you the cases that have worked well for us.

We’re using examples from PicFinder, so you can explore and get inspired by each use case, starting directly from this article.

Environment concept art

Coming up with ideas for new games, concepts, worlds, and environments can be a time-consuming process. You can search for images, which can take hours. Or you can create them, which will take even longer. But 2D genAI tools make environment ideation extremely fast, so you can explore many versions of a concept and storyboard your way into a crystal-clear design.

Some examples of game environments with different formats.

Action games

Robot battle – tall scene

Large battlefield – wide scene

Casual games

Green garden scene – portrait format

Castle interior scene – tall format

Character concept art

The need to create new characters holds many teams back from accelerating their release pipeline. Generating character assets is also essential for monetization and engagement, as players can unlock new characters through progression or payment. GenAI makes character iteration a lot faster, across a few different genres.

Fashion, makeover, and decoration games

Fashion makeover outfits – tall format

Portrait with eye makeup – portrait format

Cozy living room – square format

Cute, cozy, casual games

Explorer bear

Girl next door

Royal cat

What we learned along the way

1. Prompts matter a lot

Longer prompts are better. It also seems to work well to phrase prompts like a Google Image Search description rather than how you would describe the item in real life. Words like “high definition” or “perfect features” help get better-quality art in general.

2. Starting from an image saves a lot of time

Many genAI tools allow you to upload an image and will then create new versions in that format. This can help a lot with initial iteration, especially if you’re looking for a very particular type of art. For fashion outfits or makeovers, starting from an existing outfit image will help.

3. Different image formats generate different results

Prompting for a character in a tall format might generate a full-body image, while a square format would create a portrait. The format changes the results a lot, so exploring with the exact format you need can help you get closer to what you want, faster.

How can we help?

If you’d like us to share more genAI resources, drop us a note to let us know. GenAI is on everyone’s minds and we love being a resource on anything that helps you make better games.

How AI Could Change The Way We Build Video And Mobile Games https://gameanalytics.com/blog/ai-mobile-games/ Thu, 02 May 2019 17:03:41 +0000 https://gameanalytics.com/?p=10034


“I’m sorry Dave. I’m afraid I can’t do that.”

If you’re a sci-fi film fan or (ahem) of a certain age, that quote from HAL 9000, the sentient computer and antagonist of ‘2001: A Space Odyssey’, will likely have sent a small shiver down your spine. Artificial intelligence, or AI, has long been a go-to Hollywood villain – think Skynet, Ultron and, arguably the most evil of them all, Proteus IV from ‘Demon Seed’.

Luckily, though, science fact hasn’t mirrored fiction (not yet, at least) and “friendly AI” is now in almost all our homes in the form of Siri, Cortana, Alexa and the like. It’s also used in more mundane processes like search engines, email filtering, and chatbots.

However, AI in game design needs to catch up.

Chances are you’ve heard lots about AI playing games – famous examples include AlphaGo, the computer program that plays the board game Go, and Deep Blue, the chess-playing computer that beat world champion Garry Kasparov in 1997.

But there’s a huge difference between this and the AI used in actual games. In fact, game design is one area where AI is actually a bit behind the times. One of the reasons for this is because researchers are using these game-playing computers to understand how to train machines to perform complicated tasks in other, more lucrative, areas (which we’ll go over in more detail later). So this type of research is accelerating at a much faster rate than in-game AI.

There are other reasons why game design has fallen behind in this field. But before we get into those, let’s look at some AI basics.

A brief definition of AI

To put it simply, AI is the simulation of human intelligence by machines. By finding patterns in large amounts of data, we can train them to learn from experience and perform human-like tasks.

AI in games at the moment

If you’ve ever played a video game, then you’ve interacted with AI in some form or another. At the moment, there are two core components to game AI.

  • Pathfinding: this is used in almost all games and is where the AI plots the shortest route between two points. A good example is the ghosts in ‘Pac-Man’, which use pathfinding to decide which direction to go in.
  • Finite state machines: these let designers define complex behaviors. They power NPCs (non-player characters) in games, especially in open-world RPGs like ‘Red Dead Redemption 2’ or ‘Zelda: Breath of the Wild’.
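To make the second idea concrete, here’s a minimal sketch of a finite state machine for a hypothetical guard NPC. The states, events and transitions here are invented for illustration – they’re not from any particular game:

```python
# A minimal finite state machine for a hypothetical guard NPC.
# States and transitions are illustrative, not from a real game.

class GuardFSM:
    def __init__(self):
        self.state = "patrol"
        # transitions[state][event] -> next state
        self.transitions = {
            "patrol": {"sees_player": "chase"},
            "chase":  {"lost_player": "search", "caught_player": "attack"},
            "search": {"sees_player": "chase", "timeout": "patrol"},
            "attack": {"player_fled": "chase"},
        }

    def handle(self, event):
        # Unknown events leave the NPC in its current state
        self.state = self.transitions.get(self.state, {}).get(event, self.state)
        return self.state

guard = GuardFSM()
assert guard.handle("sees_player") == "chase"
assert guard.handle("lost_player") == "search"
assert guard.handle("timeout") == "patrol"
```

The whole behavior lives in one lookup table, which is exactly why designers like this technique: it’s predictable and easy to author.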

These two techniques have been around since the 1980s and 90s, and the way game developers use them hasn’t really changed much since then. Obviously, as processing power has improved, developers have made them look more sophisticated – but the underlying principles and fundamental concepts have hardly changed.

In practice, this means that while bosses in tricky games like ‘Dark Souls’ are using a form of AI to anticipate what players are going to do next, they’re still following set patterns which most of us can overcome without too much difficulty. And even games like ‘No Man’s Sky’, which uses a technique called procedural generation to build an almost infinite number of planets (18 quintillion, if you’re counting), are still using long-established programming techniques to do that. So why aren’t game developers taking advantage of developments in this area?
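The trick behind that kind of procedural generation is deterministic pseudo-randomness: derive everything from a seed, and you never need to store the world. Here’s a toy sketch of the idea in Python – not No Man’s Sky’s actual algorithm, and the planet attributes are made up:

```python
import random

def generate_planet(seed):
    """Deterministically derive a planet's attributes from a seed.
    The same seed always yields the same planet, so nothing needs
    to be stored. (A toy sketch; attribute names are invented.)"""
    rng = random.Random(seed)
    return {
        "radius_km": rng.randint(2000, 12000),
        "biome": rng.choice(["desert", "ocean", "jungle", "frozen", "toxic"]),
        "moons": rng.randint(0, 4),
    }

# The same seed reproduces the same world every time:
assert generate_planet(42) == generate_planet(42)
```

With 64-bit seeds you get quintillions of distinct worlds from a few lines of code – which is why the technique is “long-established” rather than cutting-edge AI.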

The ghost in the machine

Precisely because it learns, AI is inherently unpredictable – which makes it a disadvantage in gaming. Developers ultimately want to know what a player will experience. So if you put in something that’s constantly adapting and learning from the player, there’s a good chance unexpected things will happen. At worst, this could make your game unplayable.

Imagine if all the NPCs in ‘Skyrim’ remembered every ‘bad’ thing you’ve ever done in Tamriel. It’d be carnage. So game developers have largely stuck with the type of AI that powers those Pac-Man villains – which nowadays isn’t actually considered all that intelligent.

To put it bluntly, they need to get over this. Why? Because used properly, AI could fundamentally change the way games are designed and played.

3 things AI could bring to the gaming table

1. It could make it quicker to create games

First up, AI could speed up the time it takes developers to build levels and craft open-world environments. In time, it could even build entire games from scratch. This would mean bigger and better games with more sophisticated and complex environments in much less time. This could particularly benefit small, indie game designers with fewer resources.

2. It could make your games more personal

Developers could also use AI to make the rules of a game changeable – so the experience I have playing it could be completely different to yours. Games could even learn what individual players like and dislike, and adapt things to suit them as they’re playing, creating a completely personalized experience. Automated game design like this could mean that every time you sit down to play a game feels like the first time – because the game is constantly redesigning itself, no two play-throughs will ever be the same.

3. It could bring self-learning characters into the mix

And lastly, while it’s not likely to happen any time soon, one day we could get a self-learning character in a game. One that can change and grow in the same way that we humans do.

So what are we waiting for? AI is the future of game design, and one we can and should embrace.

AI tools that are already about

Feeling inspired? There are lots of AI tools out there which you can use to add new features to your games and apps. Here’s a small snapshot of five of them we’ve found online.

  • Caffe2 – developed by Facebook, Caffe2 aims to be an easy way to experiment with deep learning (where artificial neural networks learn from large amounts of data). It works across various platforms and integrates with Android Studio, Visual Studio and Xcode for mobile development.
  • Core ML – you can use this to integrate trained machine learning models into iOS apps. It’s been designed for on-device performance, which uses less memory and power.
  • ML Kit – made by Google, ML Kit offers the technologies the search engine uses to power its own experiences on mobile, and comes in both out-of-the-box solutions and custom models.
  • TensorFlow – an open-source software library for building machine learning models. It has flexible architecture which makes it easy to use on desktop, mobile and edge devices.
  • Cognitive Services – marketed as a way to use AI to solve business problems, Microsoft’s Cognitive Services lets you add intelligent algorithms to see, hear, speak, understand and interpret your user’s needs.
How Enemy AI Works In Dicey Dungeons https://gameanalytics.com/blog/enemy-ai-dicey-dungeons/ Tue, 05 Feb 2019 11:40:22 +0000 https://gameanalytics.com/?p=9759


Editor’s Note: this post was originally published by Terry Cavanagh, indie game designer currently working on Dicey Dungeons. To stay up to date with what Terry is currently working on, you can find him on Twitter, or follow his other projects on his website here.

For the past month or so, I’ve been tackling one of the biggest technical problems in my new game, Dicey Dungeons – improving the enemy AI enough for the final release of the game. It’s been pretty interesting, and lots of it was new to me, so I thought I’d write a little bit about it.

First up, a sort of disclaimer: I’m not a computer scientist – I’m just one of those people who learned enough about programming to make video games, and then stopped learning anything I didn’t have to learn. I can usually muddle through, but a real programmer probably wouldn’t have approached all this the way I did.

I tried to write all this in a fairly high level approach in mind, so that hopefully the basic ideas all make sense to other non-programmers. But I’m for sure no expert on all this stuff, and if I’ve gotten any of the details wrong in explaining the theory, let me know in the comments – happy to make corrections!

Let’s start by explaining the problem!

The problem

If you’ve not played Dicey Dungeons, here’s a crash course: it’s a deckbuilding RPG, where each enemy has a selection of equipment cards that do different things. Also, they roll dice! They then place those dice on the equipment to do damage, or cause various status effects, or heal, or shield themselves from damage, or lots of other things. Here’s a simple example of a tiny frog using a big sword and a little shield:

A more complicated example: this Handyman has a spanner, which allows it to add two dice together (so 3 + 2 would give you a single 5, and a 4 + 5 would give you a 6 and a 3). It also has a Hammer, which “shocks” the player if they use a six on it, and a Pea Shooter, which doesn’t do much damage, but which has a “countdown” which persists across turns.

One more important complication: there are status effects which change what you can do. The most important of these are “Shock”, which disables equipment at random until you unshock it by using an extra dice on it, or “Burn”, which sets your dice on fire. When your dice are on fire, you can still use them – but it’ll cost you 2 health points. Here’s what a clever Handyman does when I shock and burn all his equipment and dice:

There’s more to it than that, of course, but that’s basically the gist of it!

So, the problem: how do you make an AI that can figure out the best thing to do on its turn? How does it know which burning dice to extinguish, which dice to use for unshocking, and which dice to save for important equipment?

What it used to do

For a long time, my AI in Dicey Dungeons just had one rule: It looked at all the equipment from left to right, figured out the best dice to use on it, and used it. This worked great, until it didn’t. So, I added more rules.

For example, I dealt with shocking by looking at the unshocked equipment, and deciding what dice I would want to use on it when it was unshocked, then marking that dice as “reserved” for later. I dealt with burning dice by just checking if I had enough health to extinguish them, and choosing whether or not to do it by random chance.

I added rule after rule after rule to deal with everything I could think of, and ended up with an AI that sorta kinda worked! Actually, it’s amazing how well this hodge-podge of rules held together – the AI in Dicey Dungeons might not have always done the right thing, but it was definitely passable. At least, for a game that’s still a work in progress.

But over time, this system of adding more and more rules to the AI really started to break at the seams. People discovered consistent exploits to get the AI to do stupid things. With the right setup, one of the bosses could be tricked into never actually attacking you, for example. The more rules I added to try to fix things, the more weird things would happen, as rules started to conflict with other rules, and edge cases started to crop up.

Of course, one way to fix this was to just apply more rules – work through each problem one by one, and add a new if statement to catch it. But I think that would have just been kicking the problem further down the road. The limitation this system had was that it was only ever concerned with this question: “What is my next move?”. It could never look ahead, and figure out what might happen from a particular clever combination.

So, I decided to start over.

The classic solution

Look up AI stuff for games, and likely the first solution you’ll come across is a classic decision making algorithm called Minimax. Here’s a video that explains how it’s applied to designing a Chess AI:

Implementing Minimax works like this:

First, you create a lightweight, abstract version of your game, which has all the relevant information for a particular moment in time of the game. We’ll call this the Board. For Chess, this would be the current position of all the pieces. For Dicey Dungeons, it’s a list of dice, equipment, and status effects.

Next, you come up with a value function – a way to measure how well the game is going for a particular configuration of the game – i.e. for a particular board. For Chess, maybe a board where all the pieces are in their initial positions is worth 0 points. A board where you have captured an enemy Pawn is maybe worth 1 point – and maybe a board where you’ve lost one of your own Pawns is worth -1 points. A board where you have your opponent in checkmate is worth infinity points. Or something like that!

Then, from this abstract board, you simulate playing all the possible moves you can make, which gives you a new abstract board. Then, you simulate playing all the possible moves from those boards, and so on, for as many steps as you want. Here’s an excellent illustration of that from freecodecamp.org:

What we’re doing is creating a graph of all the possible moves both players can make, and using our value function to measure how the game is going.

Here’s where Dicey Dungeons splits from Minimax: Minimax comes from mathematical game theory, and it’s designed to figure out the best series of moves in a world where your opponent is trying to maximise their score. It’s so named because it’s about trying to minimise your loss when your opponent plays so as to maximise their gain.

But for Dicey Dungeons? I actually don’t care what my opponent is doing. For the game to be fun, you just want the AI to make moves that make sense – to figure out the best way to play their dice on their equipment to make it a fair fight. In other words, all I care about is the Max, not the Min.

Which means: for the Dicey Dungeons AI to make a good move, all I need to do is create this graph of possible moves, and look for the board which has the best score – then make the moves that lead to that point.
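That “max only” search can be sketched in a few lines. This is an illustrative reconstruction, not the game’s actual code – the board representation and helper names are assumptions:

```python
def best_line(board, legal_moves, play, score):
    """Exhaustively expand every move sequence and return the
    (score, moves) pair leading to the highest-scoring board.
    (Illustrative names - not Dicey Dungeons' real code.)"""
    best_score, best_moves = score(board), []
    for move in legal_moves(board):
        s, ms = best_line(play(board, move), legal_moves, play, score)
        if s > best_score:
            best_score, best_moves = s, [move] + ms
    return best_score, best_moves

# Toy board: (dice left, damage dealt); each move plays one die as damage.
def legal_moves(board):
    return list(board[0])

def play(board, die):
    rest = list(board[0]); rest.remove(die)
    return (tuple(rest), board[1] + die)

score = lambda board: 100 * board[1]  # 100 points per point of damage
assert best_line(((1, 3), 0), legal_moves, play, score) == (400, [1, 3])
```

The real version scores much richer boards, but the shape is the same: build the tree, take the max, and replay the moves that got you there.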

A simple enemy turn

Ok, examples! Let’s look at this frog again! How does it decide what to do? How does it know that its chosen action is the best one?

It basically just has two options. Place the 1 on the broadsword and the 3 on the shield, or do it the other way around. It obviously decides that it’s better off putting that 3 on the sword than the 1. But why? Well, because it looked at all the outcomes:

Place the 1 on the sword and you end up with a score of 438. Place the 3 on it, and you end up with a score of 558. Great, ok! Then, I get a better score by placing the 3 on the Sword, done.

Where’s that score coming from? Well, the Dicey Dungeons scoring system currently considers:

  • Damage: The most important case – 100 points for every point of damage dealt.
  • Poison: An important status effect that the AI considers almost as important as damage – 90 points for each poison.
  • Inflicting other Status effects: Like Shock, Burn, Weaken, etc. Each one of these is worth 50 points.
  • Bonus status effects: Inflicting yourself with positive status effects like Shield, etc, is worth 40 points each.
  • Using equipment: Using any piece of equipment is worth 10 points – because if all else fails, the AI should just try to use everything.
  • Reducing countdowns: Some equipment (like the Pea Shooter) just needs a total value of dice to activate. So, the AI gets 10 points for every countdown point it reduces.
  • Dice Pips: The AI gets 5 points for every unused Dice Pip – so a 1 is worth 5, and a 6 is worth 30. This is intended to make the AI prefer not to use dice it doesn’t need to use, and does a lot to make its moves look more human-like.
  • Length: The AI loses 1 point per move, making it so that long moves have very slightly lower scores than short ones. This is so that if there are two moves that would otherwise have the same score, the AI will pick the shorter one.
  • Healing: Worth just 1 point per health point healed, because while I want the AI to consider it in a tie break, I don’t want it to be preoccupied with it. Other things are always more important!
  • Bonus score: Bonus score can be applied to any move, to trick the AI into doing something they might not otherwise decide to do. Used very sparingly.

Finally, there are also two special cases – if the target of the attack is out of health, that’s worth a million points. If the AI is out of health, that’s worth minus a million points. These mean that the AI will never accidentally kill itself (by extinguishing a dice when it has very low health, say), and never pass up a move that would kill the player.
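Written out as code, the scoring system above might look like this value function. The dictionary keys are illustrative assumptions, not the game’s real data model, and the toy example doesn’t try to reproduce the frog’s exact 438/558 figures (those depend on game details not listed here):

```python
def score_board(b):
    """The scoring weights listed above, as a value function.
    Dict keys are illustrative assumptions, not the game's model."""
    if b.get("target_hp", 1) <= 0:   # killing the player beats everything
        return 1_000_000
    if b.get("own_hp", 1) <= 0:      # never kill yourself
        return -1_000_000
    return (100 * b.get("damage", 0)
            + 90 * b.get("poison", 0)
            + 50 * b.get("statuses_inflicted", 0)
            + 40 * b.get("bonus_statuses", 0)
            + 10 * b.get("equipment_used", 0)
            + 10 * b.get("countdown_reduced", 0)
            + 5 * b.get("unused_pips", 0)
            - 1 * b.get("moves_made", 0)
            + 1 * b.get("healing", 0)
            + b.get("bonus_score", 0))

# Dealing 3 damage with 2 pieces of equipment over 2 moves:
assert score_board({"damage": 3, "equipment_used": 2, "moves_made": 2}) == 318
```

Note how the two special cases sit outside the weighted sum entirely – no combination of ordinary points can ever outweigh a kill or a suicide.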

These numbers aren’t perfect, for sure – take, for example, these currently open issues: #640, #642, #649 – but it actually doesn’t matter that much. Even roughly accurate numbers are enough to incentivise the AI to more or less do the right thing.

Harder enemy turns

The frog case is simple enough that even my shoddy code can figure out every single possibility in 0.017 seconds. But, then things get a bit more complicated. Let’s look at that Handyman again.

Its decision tree is, uh, a little more complicated:

Unfortunately, even relatively simple cases explode in complexity pretty quickly. In this case, we end up with 2,670 nodes on our decision graph to explore, which takes quite a bit longer to figure out than the frog did – maybe as much as a second or two.

A lot of this is combinatorial complexity – for example, it doesn’t matter which of the two 2s we use to unshock the equipment initially, but this algorithm considers them as two separate decisions, and creates a whole tree of branching decisions for both. This ends up with a branch that’s a totally unnecessary duplicate. There are similar combination problems with deciding which dice to extinguish, which equipment to unshock, and what dice to use in what order.
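One standard way to tame this kind of duplication is to canonicalize boards – for instance, treat the dice pool as a sorted multiset – and skip any board you’ve already seen. A small sketch, where the board representation is an assumption for illustration:

```python
def canonical(board):
    """Two 2s are interchangeable, so sort the dice pool to give
    equivalent boards the same key. (Representation is assumed.)"""
    dice, damage = board
    return (tuple(sorted(dice)), damage)

def count_states(board, legal_moves, play, seen=None):
    """Count distinct reachable boards once duplicates are merged."""
    seen = set() if seen is None else seen
    key = canonical(board)
    if key in seen:
        return 0
    seen.add(key)
    return 1 + sum(count_states(play(board, m), legal_moves, play, seen)
                   for m in legal_moves(board))

legal_moves = lambda b: list(b[0])

def play(b, die):
    rest = list(b[0]); rest.remove(die)
    return (tuple(rest), b[1] + die)

# Two identical dice: 5 nodes in the naive tree, only 3 distinct boards.
assert count_states(((2, 2), 0), legal_moves, play) == 3
```

The saving compounds with depth, which is exactly why duplicated branches blow up hand-rolled searches so quickly.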

But even after spotting unnecessary branches like this and optimising them away (which I’ve been doing to some extent), there is always going to be a point where the complexity of the possible permutations of decisions leads to huge, slow decision trees that take forever to figure out. So, that’s one major problem with this approach. Here’s another:

This important piece of equipment (and things like it) cause a problem for the AI, because they have an uncertain outcome. If I put a six in this, maybe I’ll get a five and a one, or I might get a four and two, or maybe I’ll get two threes. I won’t know until I do it, so it’s really hard to make a plan that takes this into account.

Thankfully, there is a good solution to both of these problems that Dicey Dungeons uses!

The modern solution

Monte Carlo Tree Search (or MCTS, for short) is a probabilistic decision making algorithm. Here is a, uh, slightly odd video which nevertheless explains the idea behind Monte Carlo based decision making really well:

Basically, instead of graphing out every single possible move we can make, MCTS works by trying out sequences of random moves, and then keeping track of the ones that went the best. It can magically decide which branches of our decision tree are the “most promising” thanks to a formula called the Upper Confidence Bound algorithm:

That formula, by the way, is from this very helpful article on Monte Carlo Tree Searches. Don’t ask me how it works!
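For reference, UCB1 balances a branch’s average value so far (exploitation) against how rarely it’s been tried (exploration). A direct translation:

```python
import math

def ucb1(value_sum, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 = average value + exploration bonus. Unvisited nodes
    score infinity, so every child gets tried at least once."""
    if visits == 0:
        return math.inf
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)
```

Branches that have scored well keep a high first term; branches that have barely been explored keep a high second term. The constant `c` just tunes the trade-off.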

The wonderful thing about MCTS is that it can usually find the best decision without having to brute force everything, and you can apply it to the same abstract board/move simulation system as minimax. So, you can kinda do both. Which is what I’ve ended up doing for Dicey Dungeons. First, it tries to do an exhaustive expansion of the decision tree, which usually doesn’t take very long and leads to the best outcome – but if that’s looking too big, it falls back to using MCTS.

MCTS has two really cool properties that make it great for Dicey Dungeons:

  • One – it’s great at dealing with uncertainty. Because it’s running over and over again, aggregating data from each run, I can just let it simulate uncertain moves (like using a lockpick) naturally, and over repeated runs, it’ll come up with a pretty good range of scores for how well that move will work out.
  • Two – it can give me a partial solution. You can basically do as many simulations as you like with MCTS. In fact, in theory, if you let it run forever, it should converge on exactly the same result as Minimax. More to the point for me, though – I can use MCTS to generally get a good decision out of a limited amount of thinking time. The more searches you do, the better the “decision” you’ll find – but for Dicey Dungeons, it’s often good enough to just do a few hundred searches, which only takes a fraction of a second.
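The core idea – random rollouts, keep track of the best – can be sketched without the full tree machinery. This “flat” Monte Carlo version is a simplification of real MCTS (which additionally grows a tree and selects branches with UCB), and the toy board is invented for illustration:

```python
import random

def monte_carlo_choose(board, legal_moves, play, score, n_rollouts=200):
    """Flat Monte Carlo: for each first move, finish the turn with
    random moves many times, keeping the best score seen. A
    simplification of MCTS - no tree, no UCB selection."""
    best_move, best_score = None, float("-inf")
    for first in legal_moves(board):
        for _ in range(n_rollouts):
            b = play(board, first)
            options = legal_moves(b)
            while options:                      # play out the turn randomly
                b = play(b, random.choice(options))
                options = legal_moves(b)
            if score(b) > best_score:
                best_move, best_score = first, score(b)
    return best_move, best_score

# Toy board: (dice left, damage dealt); each move plays one die as damage.
legal_moves = lambda b: list(b[0])

def play(b, die):
    rest = list(b[0]); rest.remove(die)
    return (tuple(rest), b[1] + die)

move, s = monte_carlo_choose(((2, 6), 0), legal_moves, play, lambda b: 100 * b[1])
assert s == 800  # every rollout eventually plays both dice
```

Because it’s an anytime algorithm, you can cap `n_rollouts` to fit a frame budget – exactly the “limited thinking time” property described above.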

Some cool tangents

So, that’s how the enemies in Dicey Dungeons decide how to kill you! I look forward to introducing this in the upcoming version v0.15 of the game!

Here are some tangential thoughts that I don’t really know where to put:

Those graphs I’ve been showing gifs of? Including this one on twitter:

Sure, the neighbours seem to be really enjoying their party, but the REAL fun is going on here: spent the evening hacking together a GraphML exporter for Dicey Dungeons’ new AI! Now I can explore enemy moves and actually see what’s going on step-by-step! #screenshotsaturday pic.twitter.com/EeCwUz2NBK

— Terry (@terrycavanagh) November 25, 2018

I created these by writing an exporter for GraphML, which is an open source graph file format that can be read with many different tools. (I’ve been using yEd, which is great and which I can recommend a lot.)

Also! Part of making this all work was figuring out how to let the AI simulate moves, which was a big puzzle in and of itself. So, I ended up implementing an action scripting system. Now, when you use a piece of equipment, it runs these tiny little scripts that look like this:

These little scripts are executed by hscript, a Haxe-based expression parser and interpreter. This was definitely kind of a pain to implement, but the payoff is great: it makes the game super, super moddable. I’m hoping that when this game finally comes out, people will be able to use this system to design their own equipment that can do basically any cool thing they can think up. And, even better, because the AI is smart enough to evaluate any action you give it, enemies will be able to figure out how to use whatever weird modded equipment you give them!
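To illustrate the shape of such an action-scripting system: equipment behavior lives in small scripts evaluated against a whitelisted API, so the game and the AI simulator can both run the same script. This Python toy is only an analogy (the real game uses hscript, and Python’s `exec` is not a safe sandbox) – all the function and field names here are invented:

```python
# Toy analogy for an action-scripting system: equipment scripts call a
# small whitelisted API that mutates a shared state dict. Names are
# invented for illustration; Dicey Dungeons actually uses hscript.

def make_api(state):
    return {
        "damage": lambda n: state.__setitem__("target_hp", state["target_hp"] - n),
        "heal":   lambda n: state.__setitem__("own_hp", state["own_hp"] + n),
        "dice":   state["dice_value"],   # the die placed on the equipment
    }

def run_equipment_script(script, state):
    # Empty builtins: the script can only reach the whitelisted API.
    exec(script, {"__builtins__": {}}, make_api(state))

state = {"target_hp": 20, "own_hp": 5, "dice_value": 4}
run_equipment_script("damage(dice)", state)   # a "sword": damage = die value
assert state["target_hp"] == 16
```

Because the AI simulator runs scripts against a copied state, it can score the outcome of any equipment – including modded equipment it has never seen before.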

Thanks for reading! Happy to answer any questions or to clarify any of this in the comments below!

(And, finally, if you’re interested in playing Dicey Dungeons, you can get alpha access on itch.io right now, or if you prefer, wishlist us on steam, which will send you a little reminder when the game comes out.)
