
Topic: The AI thread


skywake

I was watching this video from Nintendo Forecast and was a bit annoyed at how little he seemed to understand what "AI" is. Then I realised that a lot of people just generally don't seem to understand what "AI" is. So rather than being annoyed at how little people understand it, I thought I'd open up a discussion so people can understand it better. So here we are. BTW, don't zone out, read through. This is all a bit dry but it's also important and I think worth it

So first things first, let's clarify what "AI" means here. Generally, in the modern use of the term, AI is a process where you put in a bunch of example inputs and outputs and it creates an algorithm that calculates outputs from inputs. For example, instead of writing a formula like y = 2x, you'd feed it pairs like x = 0, y = 0 and x = 1, y = 2, and it would come up with some kind of internal formula that converts x into y. So when you then give it x = 4 it gives you an answer. Not necessarily the right answer, but an answer

Now anyone with half a brain will see the problem here. With so little data, how is it going to know what comes next? How would the above example know the formula is y = 2x and not y = x^2 + x, or "y = 2 if x = 1, otherwise y = 0"? Well, it doesn't, but it'll spit out an answer anyway. The "revolution" we're seeing ATM isn't really some magical change to this process. All we're seeing now is companies spending a lot of money feeding a lot of data into these programs, while hardware manufacturers build silicon specifically designed to run the kinds of algorithms this process spits out
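
To make that "it doesn't know, but answers anyway" point concrete, here's a quick sketch in Python (just my own illustration): both of the rules below match the two example points perfectly, but they disagree the moment you ask about a new input.

```python
# Two candidate "models" that both fit the training data perfectly:
# the example pairs (x=0, y=0) and (x=1, y=2).

def model_a(x):
    """The intended rule, y = 2x."""
    return 2 * x

def model_b(x):
    """An equally valid fit to the same two points, y = x^2 + x."""
    return x ** 2 + x

# Both agree on everything they were "trained" on...
for x in (0, 1):
    assert model_a(x) == model_b(x)

# ...but they diverge as soon as you ask about a new input.
print(model_a(4), model_b(4))  # 8 vs 20
```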

That's the basics of this whole thing. What does it all mean? Well, basically, this kind of process is entirely useless for simple calculations. If you know y = 2x then you just write y = 2x and it'll be right every time, and fast. And there are a lot of problems in computing with very robust mathematical solutions. But there are other problems that are very, very complex. Including the forum favourite, image upscaling

.... so, let's talk DLSS for a bit. Upscaling algorithms exist and have done for a while. If you want to turn four pixels into 16 there are ways to do it. Usually they involve some form of essentially drawing a line between two points. So if you have a black pixel and a white pixel, the pixel in the middle is grey. But oops, now the image is blurry. So you use a different algorithm that doesn't average out the values and now, oops, the grids don't line up and that powerline gets thicker and thinner as it moves. Now you could make this algorithm more and more complex, you could add edge detection and consider the previous frame and so on. Or..... you could feed a bunch of examples into a machine learning algorithm and have it spit out a scaling algorithm
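
Here's a toy sketch of those two trade-offs (my own illustration in Python, not how any real upscaler is implemented), using a one-dimensional row of pixels with a hard black/white edge: repeating pixels keeps the edge hard but makes thin features jump in width, while interpolating between them smooths everything into grey.

```python
import numpy as np

row = np.array([0, 0, 255, 255], dtype=float)  # hard black-to-white edge

def upscale_nearest(pixels, factor=2):
    """Repeat each pixel: edges stay hard, but thin features change width unevenly."""
    return np.repeat(pixels, factor)

def upscale_linear(pixels, factor=2):
    """Interpolate between samples: smooth, but a hard edge turns grey."""
    old_x = np.arange(len(pixels))
    new_x = np.linspace(0, len(pixels) - 1, len(pixels) * factor)
    return np.interp(new_x, old_x, pixels)

print(upscale_nearest(row))  # [0, 0, 0, 0, 255, 255, 255, 255] -- edge preserved
print(upscale_linear(row))   # grey values appear around the edge -- blur
```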

And when you do that? It "knows" that a dark line against a pale blue background should behave like a powerline, because that's what it's been shown. So it should be the same width with a hard edge, except when it's on a diagonal, where it shouldn't alias. It straight up invents detail that didn't exist in things like gravel textures, because when you run at a higher resolution things like gravel have more detail. You can even do things like generate in-between frames, where it "understands" that when a hand moves from point A to point B, the frame in the middle has the hand in the middle, not semi-transparent in both positions. Because that's how in-between frames behave
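
That in-between frame behaviour is easy to show with a toy example too (again just an illustration, not the actual algorithm): naively blending two frames leaves a half-transparent ghost in both positions, while a motion-aware in-between puts the object at the midpoint.

```python
import numpy as np

# A 1D "frame" of 9 pixels containing a single bright object.
frame_a = np.zeros(9)
frame_a[2] = 1.0          # object at position 2
frame_b = np.zeros(9)
frame_b[6] = 1.0          # object has moved to position 6

# Naive blend: a half-transparent ghost appears in BOTH positions.
ghosted = (frame_a + frame_b) / 2
print(ghosted)            # 0.5 at index 2 and 0.5 at index 6

# Motion-aware in-between: the object sits fully opaque at the midpoint.
midpoint = np.zeros(9)
midpoint[(2 + 6) // 2] = 1.0
print(midpoint)           # 1.0 at index 4
```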

But all of this... it's not magic, it's not free. It still takes time to calculate and it's still just "guessing" the next item in the sequence. And a lot of the generative models you're seeing these days are, essentially, a kind of search engine. A search engine that, instead of spitting out the raw result, spits out the output of a formula derived from its training data. It's not generating an image of a turtle on a rollercoaster in the style of Ghibli. It's pulling from its "experience" of turtles, rollercoasters and Ghibli and spitting out an answer that intersects with those points, using the completely inscrutable algorithm it has generated

Anywho, discuss!

Edited on by skywake

Some playlists: Top All Time Songs, Top Last Year
An opinion is only respectable if it can be defended. Respect people, not opinions

PikminMarioKirby

Honestly I don’t know much about AI but what you’re saying makes sense. I feel like there’s good and bad in AI, because it can definitely be taken too far. Like for instance, I don’t want AI NPCs or dialogue or artwork in any game. As long as it’s not taking information from outside sources I’m good. I’d rather have a hand-crafted game, as too much AI can make a game feel ‘fake’. I don’t want Nintendo, for instance, to train its AI with some of Microsoft’s work (and not just because it’s Microsoft)

I only accept the “every copy of Super Mario 64 is personalized” AI

Edited on by PikminMarioKirby

Some of my favorite games are Paper Mario and TTYD, SM64, Luigi’s Mansion, Pikmin 1-4, Kirby Forgotten Land, and the DKC OG trilogy (especially the first 2). All on Switch besides LM. Nintendo please bring it back!

FishyS

When I saw the name of this thread I thought it was going to be one of those weird threads made by someone who just made an account 😆

The public AI discourse is so weird lately because when most people hear AI they think 'Artificial General Intelligence', but all the text and image AI tools people got excited about recently (and then got sick of) are just Large Language Models (LLMs) and similar generative models, which are very, very much not that. Given how much human-moderated data a lot of those techniques need to work well, their abilities will hit a wall unless some major algorithmic improvements are made.

That said, if you ignore the word 'AI' and just think of a machine learning model created for a specific application (such as Deep Learning Super Sampling), it is really just another step in automated computer processing, the kind we have all used without really noticing our whole lives. Sure, it's cooler in some ways if a game has 100% hand-painted art where every pixel was placed intentionally, but not much of the digital art in games is really like that.

Different thought, but I've also noticed that most people who freak out about AI suddenly get a lot less excited and a lot more bored if you start spouting probability and linear algebra at them to explain how some of it actually works. 😝 I say that in jest, but people genuinely are less scared of things they find boring, even when they don't understand them in either case.

Edited on by FishyS

FishyS

Switch Friend Code: SW-2425-4361-0241

skywake

I think the main pain point we're going to have to get through is the same one that we had to get through in the late 90s with the internet. This stuff isn't magic. You can't just change your processes to use this new tool and get a cheaper, better product automatically. You have to think about how you can use it, what it's good at and what it isn't good at

Right now we're seeing stuff like the GTA remasters, where they just straight up ran the textures through an AI upscaler and shipped it like that. No curation at all. Or sites that just take a news article and feed it through ChatGPT to avoid plagiarism claims. Or things like that AI music generation tool. All that stuff, I feel, is the visible jank in this AI bubble. That bubble will burst once the novelty wears off

But when we come out the other side of that? We'll be left with the .com bubble equivalents of Amazon, Spotify, Google, Netflix. Where stuff like ChatGPT and DALL-E fits into all of that, I'm not entirely sure. But I suspect when it comes to gaming in particular, the future is a bit more DLSS than GPT

Some playlists: Top All Time Songs, Top Last Year
An opinion is only respectable if it can be defended. Respect people, not opinions

Pastellioli

I also do not have good knowledge of AI, but I have heard a lot of opinions on it, at least in the art community. I think AI-generated “art” became popular almost two years ago, after people used an AI generator to make intentionally nonsensical and weird images for memes. I did initially think that AI-generated artwork was fine, but after I read an article Nintendo Life made where they interviewed several video game artists on the use of AI in art and brought up several points (like the legality of some pictures), I started to think about it a lot more negatively.

Before I read that article, I actually used some AI pictures as references and inspiration for drawings I made by hand. However, I heard from someone that most AI pictures and “art” are usually made by combining several images from the internet, and most generators do not credit the people they take the images from. Although I never took the images and claimed them as my own, or directly copied them, I still felt pretty bad about it and stopped using AI-generated images for my art, since I felt like I was copying ideas other people had already made before me.

I do think AI, in regards to creativity and art, can be helpful, as it can spark creativity in artists to make new stuff and sort of give them a vision of what their ideas might look like when brought to fruition, and I think AI can have some other benefits in other areas.

However, people have used AI generators to make “art” for malicious purposes. I remember there was an art contest a few years ago where some guy won first place, but it turned out the winner had used a picture from an AI generator and passed it off as a real painting he made. I have seen artists have their art fed to generators to copy their art styles, and most of the time, whenever someone kindly asks for the generators based on their art to be taken down, they get insulted and harassed. Some people even intentionally use it to make people lose their jobs!

I prefer creative works made by real people over works generated by AI. AI work feels sort of soulless to me, since there isn’t a real person behind it at all and no passion really shines through. It just feels fake and artificial in my opinion.

Edited on by Pastellioli

Viva happy!

I’m an elephant!

Currently playing: Rare Replay and Conker: Live and Reloaded

Current hyperfixation: Conker’s Bad Fur Day

skywake

@Pastellioli
As I said in my first post, the way these things work is that they take a bunch of examples of "this input -> this output" and create an algorithm that calculates outputs from inputs. So if you feed a model images scraped from the internet along with their descriptions? Fundamentally the output image isn't that different from just searching for an image on the internet. So it should be treated the same way

There was this music generation tool that made a bit of a splash a few weeks back, and people discovered it was pulling its data from Rate Your Music. So much so that if you pulled user tags from RYM and dropped them into the generator it would, more or less, just give you a track that sounded exactly like that artist. Which shouldn't surprise anyone, because that's exactly how this stuff works

What I think gets lost a bit in all of this is that while this stuff is the flashy bit everyone notices, there are other ways to use it. As an example, consider the task of doing texture work for a game. You want to make high resolution textures for an organic surface, and you want them non-repeating and tileable. Well, you could manually create all of the required assets..... or you could start with a style, do a bit of it yourself, and then throw it to an algorithm to expand on. And possibly you could even make it so that this happens on the hardware at runtime

Or you could go full classic "game AI" and have the input be the current game state: your health and status, the enemy's health and status, your previous attack patterns and preferences. And have the output be enemy behaviour. And do that with an actual AI model rather than some predefined script
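
As a minimal sketch of what that could look like (assuming Python and scikit-learn, with made-up feature names and behaviours), you'd train a small classifier on example game states and then ask it for a behaviour at runtime:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training examples: [player_health, enemy_health, player_is_aggressive]
# paired with the enemy behaviour we'd want in that situation.
states = [
    [100, 100, 0],
    [100,  20, 0],
    [ 30, 100, 1],
    [ 10,  80, 1],
]
behaviours = ["patrol", "retreat", "defend", "attack"]

model = DecisionTreeClassifier().fit(states, behaviours)

# At runtime, feed in the current game state and get a behaviour back.
print(model.predict([[15, 90, 1]]))  # likely "attack", given the examples above
```

In practice you'd obviously want far more examples and features than that, but the shape of it is the same: game state in, behaviour out, with the mapping learned rather than hand-scripted.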

Edited on by skywake

Some playlists: Top All Time Songs, Top Last Year
An opinion is only respectable if it can be defended. Respect people, not opinions

FishyS

If you ignore all the (important) copyright/IP etc. issues of data-assisted AI (and skywake just gave an excellent example of building your own training set so those issues don't apply), I'm curious whether people would still be annoyed by the concept of AI making some of the game.

It's not 'AI' in the precise sense we are talking about right now, but procedurally-generated levels or worlds (e.g. roguelikes, Minecraft) are very common, and I would argue they're morally the same as 'AI' (again, if you remove the pesky intellectual property concerns). Those generators may not have been created with deep learning, but who cares which algorithm you use: fundamentally it's another way of making large parts of your game via computer rather than a human figuring out every little pixel and NPC and tree placement.
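
As a toy illustration of that point (my own sketch in Python, nothing to do with any particular game), a few lines of random-walk code will carve out a perfectly playable little cave layout with no human placing anything:

```python
import random

WIDTH, HEIGHT, STEPS = 20, 10, 120

# Start with solid rock ('#') and carve out floor ('.') with a drunken walk.
grid = [["#"] * WIDTH for _ in range(HEIGHT)]
x, y = WIDTH // 2, HEIGHT // 2
for _ in range(STEPS):
    grid[y][x] = "."
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x = min(max(x + dx, 1), WIDTH - 2)   # stay inside the outer wall
    y = min(max(y + dy, 1), HEIGHT - 2)

print("\n".join("".join(row) for row in grid))
```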

I can understand procedurally-generated games not being some people's cup of tea art or gameplay-wise, but I've never really heard people call them immoral travesties which will destroy all game creation and cause everyone to lose their jobs (for example).

Edited on by FishyS

FishyS

Switch Friend Code: SW-2425-4361-0241

