Is there anything actually useful or novel about "AI"?

https://lemmy.ml/post/2331707

Is there anything actually useful or novel about "AI"? - Lemmy

Feels like we’ve got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone that lost their mind (and savings) on crypto just pivoted to AI. In addition to that you’ve got all these people invested in AI companies running around with flashlights under their chins like “bro this is so scary how good we made this thing”. Seems like bullshit. I’ve seen people generating bits of programming with it, which seems useful, but idk man. Coming from CNC I don’t think I’d just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

It’s really good at filling in gaps, or rearranging things, or aggregating data or finding patterns.

So if you need gaps filled, things rearranged, data aggregated or patterns found: AI is useful.

And that’s just what this one dumb guy knows. Someone smarter can probably provide way more uses.

Hi academic here,

I research AI - better referred to as Machine Learning (ML) since it does away with the hype and more accurately describes what’s happening - and I can provide an overview of the three main types:

  • Supervised Learning: Predicting the correct output for an input. Trained from known examples. E.g.: “Here are 500 correctly labelled pictures of cats and dogs; now tell me, is this picture a cat or a dog?” Other examples include facial recognition and numeric prediction tasks, like predicting today’s expected profit or stock price based on historic data.

  • Unsupervised Learning: Identifying patterns and structures in data. Trained on unlabelled data. E.g.: “Here are a bunch of customer profiles; group them by similarity however makes most sense to you.” This can be used for targeted advertising. Another example is generative AI such as ChatGPT or DALL-E: “Here’s a bunch of prompt-responses/captioned-images; identify the underlying way of creating the response/image from the prompt/image.”

  • Reinforcement Learning: Decision making to maximise a reward signal. Trained through trial and error. E.g.: “Control this robot to stand where I want. The reward is negative every second you’re not there, and very negative whenever you fall over. A positive reward is given whilst you are in the target location.” Other examples include playing board games or video games, or selecting content for people to watch/read/look at to maximise their time spent using an app.
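As an illustration of the supervised case, here’s a toy sketch of the cat/dog idea: a 1-nearest-neighbour classifier over a handful of made-up labelled points (the features and labels are invented purely for illustration):

```python
# Toy supervised learning: 1-nearest-neighbour classification.
# Training set: (feature vector, label) pairs with known labels.
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.3, 4.8), "dog"),
]

def predict(x):
    """Predict the label of x as the label of its closest training example."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(train, key=lambda pair: dist2(pair[0], x))
    return label

# A new, unlabelled point near the "cat" examples is classified as "cat".
print(predict((1.1, 1.0)))
```

Real systems replace the lookup with a trained model (a neural network, say), but the shape of the task is the same: known input-output pairs in, a predictor out.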

  • What do you think about calling it AI?

    So typically there are 4 main competing interpretations of what AI is:

  • Acting like a human
  • Thinking like a human
  • Acting rationally
  • Thinking rationally
    These are from Russell and Norvig’s “Artificial Intelligence: A Modern Approach”.

    Alan Turing’s “Turing Test” tests whether a given agent is artificially intelligent (according to definition #1). The test involves a human conversing with the agent via text messages and deciding whether the agent is human or not. Large language models, a form of machine learning, can produce chatbot agents which pass this test: sufficiently prompted instances of GPT-4 texting an assessor, for example. The assessor occasionally interacts with real humans so that they are kept sufficiently uncertain.

    By this point, I think that machine learning in the form of an LLM can achieve artificial intelligence according to definition #1, but that isn’t what most non-tech non-academic people mean by AI.

    The mainstream definition of AI is what we would call Artificial General Intelligence (AGI). This is an agent that meets a given one of Norvig’s criteria for AI across multiple scenarios and situations that they have never encountered before.

    Many would argue that LLMs like GPT4 do not meet the criteria for AGI because they are not general enough, unable to learn to play an Atari game for example, or to learn an entirely unseen language to fluency.

    This is the difference between an LLM and a fictional AGI like GLaDOS or Skynet.

    Additionally, forms of machine learning exist, like k-means clustering, whose only function is identifying related groups within a dataset. I would assert these are not AI, although a weak argument could be made that they are thinking “rationally” enough to meet definition #4.
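For reference, k-means itself is only a few lines. A minimal sketch on made-up 2-D data (farthest-point initialisation is used here just to keep the example deterministic; real implementations usually initialise randomly):

```python
def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=10):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster; repeat."""
    # Deterministic farthest-point initialisation for this sketch.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Two obvious groups of made-up customer-like points:
data = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0), (10.0, 10.1), (9.8, 9.9), (10.2, 10.0)]
centroids = sorted(kmeans(data, 2))  # one centroid settles near each group
```

No labels anywhere, which is what makes this "unsupervised": the algorithm finds the groups on its own.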

    Then there are forms of AI which are not machine learning, such as heuristic agents - agents whose reasoning is hard-coded by humans - such as the chess-playing Stockfish, or the AI found in most video games.

    Ultimately AI can describe machine learning if “AI” is understood as something which meets one or more of Norvig’s definitions. But since most people say AI when they mean AGI, I think “machine learning” is a better term. Less undeserved hype, less marketing disinformation, and generally better at communicating what is being talked about.

    Artificial Intelligence: A Modern Approach - Wikipedia

    Thanks for taking the time and laying it out so clearly.

    I don’t think the comparison with crypto is fair.

    People are actually using these models in their daily lives.

    I’m one of those that use it in my daily life.

    The current top comment says it’s “really good at filling in gaps, or rearranging things, or aggregating data or finding patterns.”

    So, I use Perplexity.ai like you would use Google. Except I don’t have to deal with shitty ads and a bunch of filler content. It summarizes links for me, so I can more quickly understand whatever I’m searching for. However, I personally believe it’s important to look directly at the sources once I get the summary, if only to verify the summary. So, in this instance, I find AI makes understanding a topic easier and faster than alternatives.

    As a graduate student, I use ChatGPT extensively, but ethically. I’m not writing essays with it. I am, however, downloading lecture notes as PDFs and having ChatGPT rearrange that information into an outline. Or I copy whole chapters from a book and have it do the same. Suddenly, my reading time is cut down by like 45 minutes, because it takes me 15 minutes to get output that I just copy and paste into my notes, which I take digitally.

    Honestly, using it like I do, it’s pretty clear that AI is both as scary as it sounds in some instances and not, in others. The concern with disinformation during the 2024 election is a real concern. I could generate essays with it with whatever conclusions I wanted. In contrast, the concern that AI is scary smart and will take over the world is nonsense. It’s not smart in any meaningful sense and doesn’t have goals. Smart bombs are just dumb bombs with the ability to home in better on the target; it still has the mission of blowing shit up given to it by some person and inherent in its design. AI is the same way.

    Perplexity.ai

    Huh, this one looks pretty cool. Is it good enough to use as a default search engine, or would it still be better to leave google for it?

    It’s useful for when you want to go down a rabbit hole. It’s less useful for super specific stuff, like where to go if you want your nails done.
    Thank you for perplexity.ai, didn’t know about this one
    I’m currently working on my bachelor thesis and checked perplexity.ai out after I saw your comment. This is incredibly useful, thanks for sharing!
    I love revisiting comments like these every 4 years.
    And yet, people still don’t use crypto in their daily lives. How many years has it been?

    Reddit just tied karma to the blockchain lol

    Not saying it’s a good use, but lots of people are going to be using it now.

    People have actually used crypto to make payments. Crypto is valuable, but only when it’s widely adopted. Before you say something like “use a database,” you might take the time to understand what decentralized blockchains accomplish: namely, removing a class of corruption from information-coordination tasks.
    Why bother with the overhead of blockchain when users centralise on a handful of exchanges that act like banks?
    Exchanges only exist to convert away from the crypto. If crypto were the standard money, they wouldn’t survive. They aren’t the banks of the blockchain; they are the intersection of fiat banks and the blockchain.

    Strongly disagree, some exchanges don’t even have fiat on-ramps.

    Blockchain is inefficient and pointless when users centralise on coinbase and binance.

    Yes, community list: lemmy.intai.tech/post/2182

    LLMs are extremely flexible and capable encoding engines with emergent properties.

    I wouldn’t bank on them “replacing all software” soon, but they are quickly moving into areas where classic Turing-style code just would not scale easily, usually due to complexity/maintenance.

    ML Community Web - Integrated AI

    # intai.tech Communities

    - news and events
    - prompting, ChatGPT
    - guides and how-tos
    - developer discussion and resources
    - show off your projects
    - usable research papers
    - model training
    - models
    - AI apps and tools
    - ethics and law
    - application security and jailbreak/anti-jailbreak
    - generative content (StableDiffusion, MidJourney, etc.)
    - HuggingFace
    - AI bot gallery
    - AI-made things, digital or otherwise
    - AI memes and AI-made memes
    - image gen

    # Fediverse Communities

    - ChatGPT / Bots
    - Generative Art
    - AI Discussion

    # Discord

    - OpenAgent [https://discord.gg/CSf6rccBWy]
    - Integrated AI [https://discord.gg/rRm7kwJ7td]
    - Integro Labs [https://discord.gg/k4ey9nd4cg]
    - HomoScripter [https://discord.gg/9C5ec9Eysk]
    - High-quality AI [https://twitter.com/i/lists/1656461564319535105]
    - LocalLLaMA [https://discord.gg/Y8H8uUtxc3]

    # Twitter Lists

    - Integrated AI [https://twitter.com/i/lists/1661246989927960579]
    - AI / ML [https://twitter.com/i/lists/1590958109073162241]
    - AI Agents & Bots [https://twitter.com/i/lists/1643856283928502273]
    - Generative AI [https://twitter.com/i/lists/1649315100527046658]

    # Other

    - Chirper.ai [https://chirper.ai/]

    Yes. What a strange question…as if hivemind fads are somehow relevant to the merits of a technology.

    There are plenty of useful, novel applications for AI, just like there are PLENTY of useful, novel applications for crypto. Just because the hivemind has turned to a new fad in technology doesn’t mean that actual, intelligent people just stop using these novel technologies. There are legitimate use-cases for both AI and crypto. Degenerate gamblers and Do Kwon/SBF just caused a pendulum swing on crypto…nothing changed about the technology. It’s just that the public has had their opinions shifted temporarily.

    It is extremely useful in the right circumstances. When people say it isn’t useful or that it’s ‘stupid’, they’re not looking at the proper use cases - every tool has good and bad ways to use it (you wouldn’t use a hammer to peel an apple).

    For example, we will soon have fully rendered smoke simulated in real time in 3D spaces (i.e. video games), because we can calculate a small portion of how that smoke looks and then have AI guess what the rest looks like (with shockingly good results!)

    AI is not a fad, it’s not going away, it’s improving rapidly, and it is going to massively change our digital world within years.

    Opinion source: a professional programmer, game developer, and someone that thoroughly despises cryptocurrency

    AI is nothing like cryptocurrency. Cryptocurrencies didn’t solve any problems. We already use digital currencies and they’re very convenient.

    AI has solved many problems we couldn’t solve before and it’s still new. I don’t doubt that AI will change the world. I believe 20 years from now, our society will be as dependent on AI as it is on the internet.

    I have personally used it to automate some Excel stuff I do at work. I just described my sheet and what I wanted done, and it gave me a block of code that did it. I had spent time previously looking stuff up on forums with no luck; my issue was too specific to my work for anybody to have run into it before. One query to ChatGPT solved my issue perfectly in seconds, and that’s just a free online tool in its infancy.
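For what it’s worth, that kind of one-off spreadsheet chore usually boils down to a few lines once generated. A hedged sketch of the sort of thing involved, using Python’s csv module on invented data rather than the commenter’s actual Excel sheet:

```python
import csv
import io
from collections import defaultdict

# Invented stand-in for a spreadsheet export: one row per sale.
sheet = io.StringIO(
    "region,amount\n"
    "north,100\n"
    "south,250\n"
    "north,50\n"
)

# Total the amounts per region -- the kind of grouping task that is
# tedious by hand but trivial once the code exists.
totals = defaultdict(float)
for row in csv.DictReader(sheet):
    totals[row["region"]] += float(row["amount"])

print(dict(totals))  # {'north': 150.0, 'south': 250.0}
```

The hard part is knowing such a script is possible and describing the sheet precisely, which is exactly what the prompt does.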

    Cryptocurrencies didn’t solve any problems

    Well XMR solved one problem, but yeah the rest are just gambling with extra steps

    What problem is that? Genuinely asking.

    Traceability.

    Regular financial transfers, be they credit card, direct debit, straight-up written cheques, or Interac/e-Transfer (I am Canadian; that’s an us thing), are all inherently traceable.

    XMR/Monero is not traceable; it’s specifically designed not to be, unlike Bitcoin and most other cryptocurrencies.

    Of course, shitheads consider that to be a problem, but fuck them, they’re shitheads; it’s a solution, to the problem they cause

    Thanks for the info. That’s quite the way to end a comment though.

    I mean it though.

    The people working for the DEA now are no better than the people working to enforce alcohol prohibition in 1919. It’d be nice if humanity would learn, with a hundred years to think about it, but the ruling class at least haven’t. They enforce poorly thought out puritanical laws, and the world would be better off without them.

    If I lived in America rather than Canada, which thank god I don’t, the DEA would happily kick down my door, shoot me, and then probably also shoot my wife, who doesn’t even partake of anything beyond alcohol, but would obviously be upset about my being shot.

    All cops are bastards, and should be torched with molotovs at any available opportunity. If they didn’t want to be bastards, they shouldn’t have signed up as cops; it’s not like they’re conscripts

    For me personally cryptocurrencies solve the problem of Russian money not being accepted anywhere because of one old megalomaniacal moron
    I work at a small business and we use it to write out dumb social media posts. I hated doing it before. Sometimes I’ll still write it myself and ask ChatGPT to add all the relevant emojis. I also think AI has the chance to be what we’ve always wanted from Alexa, Assistant, and Siri. Deep system integration with the OS will allow it to actually do what we want it to do with way fewer restrictions. Also, try using ChatGPT’s voice recognition in the app. It blows the one built into your phone out of the water.

    We’ve been using it at my day job to help us outline ideas for our content writers. It writes garbage content on its own, but it is a decent tool for organizing ideas.

    At least that is what we use it for. I’m sure there are other valuable uses, but it is not as valuable (to me at least) as it has been made out to be.

    Would you say it’s as good at summarizing ideas as a spelling/grammar checker is at checking spelling/grammar?

    Helpful, but not close to perfect?

    I think that is a great way to look at it.
    You could ask AI to find antibiotics to kill antibiotic resistant bacteria. The bonus would be to give it a lab and drones to conduct actual tests.
    Isn’t that more of a use case for genetic/evolutionary algorithms? Those are anything but new, however. I don’t really see much use for LLMs here, which is what the current “AI” trend is about.
    Just an example of asking AI to solve a complex problem instead of a human. I’m not a subject matter expert.
    I will give you just one example. Pharmaceutical companies often create aggregate reports where they have to process a large number of cases. Say, 5000. Such processing sometimes includes analysis of X-ray or other images. Very specialized and highly paid people (radiologists) do this. It is expensive and is part of the reason why medicine prices are high. One company recently ran a trial to see if AI could do that job. Turns out it can. Huge savings for the company, and the radiologist lost their job. This is just one example of the good and bad things that will happen, and already are happening, in our society due to AI.

    Do you know this personally, or did you just read an article? My wife works at a pharmaceutical company, and if I’ve learned one thing from her stories, it’s that there will always be some person responsible for decisions! I doubt the radiologist lost her/his job.

    But it downsized her/his engagement: either just doing supervision and a sanity check, or using the tool themselves and increasing productivity.

    Yes, personally. They did the trials for precision of processing.
    Good luck to them. Very brave to put their business critical decisions into the AI basket. FDA isn’t known for being humorous.
    Every large aggregate report contains errors. As long as the errors are small and do not impact conclusions, there is no “business critical” element. And of course, they are going to check the accuracy with real human beings, constantly. But I have no doubt that AI is capable of doing this kind of work as well as or even better than human beings. So yes, some radiologists will remain employed, but you need like what? 20% of them? Less, as time goes on?
    Yes, and it should not be in the hands of a handful of companies, and it should also be regulated up the yin-yang. smartless.com/…/mit-professor-max-tegmark-live-in…
    Episodes | smartless

    smartless
    That Tegmark guy is a good example of what I was talking about. That Future of Life Institute he’s a part of has Jaan Tallinn as one of its founders, a person who is invested in AI companies. So I have a hard time telling what’s neutral information and what’s marketing.

    He is not marketing anything except his awful news site, and he answers everything very carefully. He talks about how they could be murder machines but could also cure cancer, etc. He said it’s like fire, in that it’s neither good nor bad. I say we try and control fire, though.

    I was trying to find the NHK World show where they had six experts on to talk about the future, but couldn’t find it. They had one guy saying AI is wonderful and perfect and will only do good. They had one woman, who used to work for Google, saying regulate, regulate, regulate. The other three were using it all the time, so they liked it, but were still worried about it. It was on last week if you want to give it a go.

    Yeah ill check it out
    It is a useful tool for doing things where I already know the answer but am too lazy to work it out, e.g. generating dummy data.
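A throwaway dummy-data generator is exactly the kind of thing it’s faster to ask for than to write. A sketch of what such a script might look like (the field names here are invented):

```python
import random
import string

def dummy_rows(n, seed=42):
    """Generate n fake user records: sequential ids, random names and ages.
    Seeded so the output is reproducible."""
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=6))
        rows.append({"id": i, "name": name, "age": rng.randint(18, 80)})
    return rows

rows = dummy_rows(3)  # three fake records, ready to paste into a test
```

Trivial to write, but also trivial to ask for, and the latter takes ten seconds.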

    First of all, AI is a buzzword whose meaning has changed a lot since at least the 1950s. So… what do you actually mean? If you mean LLMs like ChatGPT, it’s not AGI, that’s for sure. It is another tool that can be very useful. For coding, it’s great for getting very large blocks of code prepopulated for you to polish and verify it does what you want. For writing, it’s useful for creating a quick first draft. For fictional game scenes it’s useful for “embedding a character quickly”, but again you’ll likely want to edit it some, even for, say, a D&D game.

    I think it can replace most first-line chat-based customer service people, especially ones who already just make stuff up to say something to you (we’ve all been there). I could imagine it improving call routing if hooked into speech recognition and generation - the current menus act like you can “say anything” but really only “work” if you’re calling about stuff you could also do with simple press-1-2-3 menus. ChatGPT-based things trained on the company’s procedures and data probably could also replace that first-line call queue, because it can seem to more usefully do something with wider issues. Although companies still would need to get their heads out of their asses somewhat, too.

    Where I’ve found it falls down currently is very specific technical questions, ones you might have asked on a forum and maybe gotten an answer. I hope it improves, especially as companies start to add some of their own training data. I could imagine Microsoft more usefully replacing the first few lines of tech support for their products, and eventually having the AI pass up the chain to a ticket if it can’t solve the issue. I could imagine in the next 10 years most tech companies having purchased a service from some AI company to provide them AI support bots like they currently pay for ticket systems and web hosting. And I think in general it probably will be better for the users, because for less than the cost of the cheapest outsourced front line support person (who has near 0 knowledge) you can have the AI provide pretty good chat based access to a given set of knowledge that is growing all the time, and every customer gets that AI with that knowledge base rather than the crap shoot of if you get the person who’s been there 3 years or 1 day.

    I think we are a long way from having AI just write the program or CNC code or even important blog posts. The hallucination has to be fixed without breaking the usefulness of the model (people claim guardrails on GPT-4 make it stupider), and the thing needs to recursively look at its output and run that through a “look for bugs” prompt followed by a “fix it” prompt at the very least. Right now, it can write code with noticeable bugs; you can tell it to check for bugs and it’ll find them, and then you can ask it to fix those bugs and it’ll at least try to do that. This kind of thing needs to be built in and automatic for any sort of process - like humans check their work, we need to program the AI to check its work too. And then we might need to also integrate multiple different models so “different eyes” see the code and sign off before it's pushed. And even then, I think we’d need additional hooks, improvement, and test/simulation passes before we “don’t need human domain experts to deploy”. The thing is - it might be something we can solve in a few years with traditional integrations - or it might not be entirely possible with current LLM designs, given the weirdness around guardrails. We just don’t know.
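That generate-then-review loop is easy to express as plain control flow. In this sketch `ask_llm` is a stub that replays canned responses, since the point is the loop's shape, not any real model API (the function names and prompts are all invented):

```python
def make_stub(responses):
    """Stand-in for a model API: returns each canned response in turn."""
    it = iter(responses)
    return lambda prompt: next(it)

ask_llm = make_stub([
    "def add(a, b): return a - b",           # first draft (buggy)
    "subtraction used instead of addition",  # review finds the bug
    "def add(a, b): return a + b",           # fixed draft
    "no bugs found",                         # review passes
])

def write_code(task, max_rounds=3):
    """Draft code, then alternate 'look for bugs' and 'fix it' prompts
    until the review comes back clean or we give up."""
    code = ask_llm(f"Write code for: {task}")
    for _ in range(max_rounds):
        review = ask_llm(f"Look for bugs in:\n{code}")
        if "no bugs" in review.lower():
            break
        code = ask_llm(f"Fix this bug ({review}) in:\n{code}")
    return code

result = write_code("add two numbers")  # ends with the fixed draft
```

The hard open questions (when to trust the review, when to stop, how to test the result) sit outside this loop, which is why human oversight is still in the picture.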

    AI hasn’t really changed meaning since the 50s. It has always been the field of research about how to make computers perform tasks that previously were limited to only humans. The target is always moving because once AI researchers figure out how to solve one task with computers it’s no longer limited to humans anymore. It gets reduced to “just computations”.

    There’s even a Wikipedia page describing this phenomenon: https://en.wikipedia.org/wiki/AI_effect

    AGI is the ultimate goal of AI research. That’s when there’s no more tasks left that only humans can do.

    AI effect - Wikipedia

    I mean, you’re pointing out what I am - that over time AI has referred to very different technologies and capabilities.

    I've been using it at my job to help me write code, and it's a bit like having a sous chef. I can say "I need an if statement that checks these values" or "Give me a loop that does x, y, and z" and it'll almost always spit out the right answer. So coding, at least most of the time, changes from avoiding syntax errors and verifying the exact right format and turns into asking for and assembling parts.

    But the neat thing is that if you have a little experience with a language you can suddenly start writing a lot of code in it. I had to figure out something with Ansible with zero experience. ChatGPT helped me get a fully functioning Ansible deployment in a couple days. Without it I'd have spent weeks in StackOverflow and documentation trying to piece together the exact syntax.

    You should try out Codeium if you haven’t. It’s a VSCode toolkit completely free for personal use. I’ve had better results with it than ChatGPT
    Codeium: AI Coding Autocomplete and Chat for Python, Javascript, Typescript, Java, Go, and more - Visual Studio Marketplace

    Extension for Visual Studio Code - The modern coding superpower: free AI code acceleration plugin for your favorite languages. Type less. Code more. Ship faster.

    I mean, AI can be used to design a lot of robust yet efficient structures. In engineering and architecture, with enough data, AI can generate designs for buildings and parts that are not only sturdy but can be built with fewer resources, along with other design considerations. There's a really cool NASA video where competitors are trying to 3D-print structures for habitation in space.

    AI is also used in medicine to come up with new protein structures to create new medicine. It's also used in environmental sciences, to help predict earthquakes or monitor land use, etc.

    There's a lot of practical uses for AI.

    Crypto and AI can’t be compared at all. One is an extremely useful and revolutionary tool. The other is just pump & dump ponzi schemes for libertarians.

    AI has gone through several cycles of hype and winter. There's even a Wikipedia page for it: https://en.m.wikipedia.org/wiki/AI_winter

    Of course it's valuable to discuss the dangers and inequities of a new technology. But one of the dangers is being misled.

    AI winter - Wikipedia

    Just because it’s ‘the hot new thing’ doesn’t mean it’s a fad or a bubble. It doesn’t mean it’s not those things either, but… the internet was once the ‘hot new thing’, and it was both a bubble (completely overhyped at the time) and a real, tidal-wave change to the way that people lived, worked, and played.

    There are already several other outstanding comments, and I’m far from a prolific user of AI like some folks, but - it allows you to tap into some of the more impressive capabilities that computers have without knowing a programming language. The programming language is English, and if you can speak it or write it, AI can understand it and act on it. There are lots of edge cases, as others have mentioned below, where AI can come up with answers (by both the range and depth of its training data) where it’s seemingly breaking new ground. It’s not, of course - it’s putting together data points and synthesizing an output - but even if mechanically it’s 2 + 3 = 5, it’s really damned impressive if you don’t have the depth of training to know what 2 and 3 are.

    Having said that, yes, there are some problematic components to AI (from my perspective, the source and composition of all that training data is the biggest one), and there are obviously use cases that are, if not problematic in and of themselves, at very least troubling. Using AI to generate child pornography would be one of the more obvious cases - it’s not exactly illegal, and no one is being harmed, but is it ethical? And the more societal concerns as well - there are human beings in a capitalist system who have trained their whole lives to be artists and writers and those skills are already tragically undervalued for the most part - do we really want to incentivize their total extermination? Are we, as human beings, okay with outsourcing artistic creation to this mechanical turk (the concept, not the Amazon service), and whether we are or we aren’t, what does it say about us as a species that we’re considering it?

    The biggest practical reasons to not get too swept up with AI is that it’s limited in weird and not totally clearly understood ways. It ‘hallucinates’ data. Even when it doesn’t make something up, the first time that you run up against the edges of its capabilities, or it suggests code that doesn’t compile or an answer that is flat, provably wrong, or it says something crazy or incoherent or generates art that features humans with the wrong number of fingers or body horror or whatever…well then you realize that you should sort of treat AI like a brilliant but troubled and maybe drug-addicted coworker. Man, there are some things that it is just spookily good at. But it needs a lot of oversight, because you can cross over from spookily good to what the fuck pretty quickly and completely without warning. ‘Modern’ AI is only different from previous AI systems (I remember chatting with ELIZA in the primordial moments of the internet) because it maintains the illusion of knowing much, much better.

    Baseless speculation: I think the first major legislation of AI models is going to be to require an understanding of the training data and ‘not safe’ uses - much like ingredient labels were a response to unethical food products and especially as cars grew in size, power, and complexity the government stepped in to regulate how, where, and why cars could be used, to protect users from themselves and also to protect everyone else from the users. There’s also, at some point, I think, going to be some major paradigm shifting about training data - there’s already rumblings, but the idea that data (including this post!) that was intended for consumption by other human beings at no charge could be consumed into an AI product and then commercialized on a grand scale, possibly even at the detriment of the person who created the data, is troubling.

    I am super amateur with python and I don’t work in IT, but I’ve used it to write code for me that allows me to significantly save time in my work flow.

    Like something that used to take me an hour to do now takes 15-20 minutes.

    So as a nonprogrammer, im able to get it to write enough code that I can tweak until it works instead of just not having that tool.

    As a professional editor, yeah, it’s wild what AI is doing in the industry. I’m not even talking about chatGPT script writing and such. I watched a demo of a tool for dubbing that added in the mouth movements as well.

    They removed the mouth entirely from an English scene, fed it the line, and it generated not only the Chinese but generated a mouth to say it. It’s wild.

    Wow it's smooth too; I was expecting it to look like a creepy old Clutch Cargo cartoon.

    Who better to ask than ChatGPT itself. I threw your entire post at ChatGPT and here’s what it said:

    As an AI language model, I can understand your skepticism and concerns regarding the hype surrounding AI. It’s true that both cryptocurrency and AI have been subjects of intense hype and speculation in recent years. Let me try to address your concerns and shed some light on the practicality and usefulness of AI.

    1. Crypto Speculation vs. AI Investment: It's not uncommon for people who experienced gains or losses in one investment area, such as cryptocurrency, to look for new opportunities in other fields like AI. Some might see AI as the next big thing, leading to a surge in investments and hype around AI companies. However, AI and cryptocurrency are fundamentally different, and while there may be some overlap in investor interest, they are distinct technologies with their own set of risks and potential.

    2. AI's Practical Applications: AI has indeed shown remarkable promise in various domains. In the tech industry, AI is being applied to natural language processing, image recognition, recommendation systems, healthcare diagnostics, autonomous vehicles, and more. AI has the potential to improve efficiency, automate repetitive tasks, and enable innovative solutions to complex problems. However, it's crucial to understand that AI is not a magical solution for every problem, and its success depends on careful implementation and understanding its limitations.

    3. Generative AI: You mentioned seeing people generate bits of programming with AI. Generative AI models, like ChatGPT, can be useful for automating some programming tasks, creating code snippets, generating natural language descriptions, or even helping with creative writing. They can be valuable tools, especially for tasks that involve large amounts of repetitive text generation.

    4. AI Limitations: While AI has made impressive strides, it's essential to recognize its limitations. AI models like GPT-3 excel in pattern recognition and generating coherent text, but they lack true understanding and reasoning capabilities. They also might produce inaccurate or biased outputs if not carefully supervised or fine-tuned.

    5. Responsible Use of AI: Flashy demonstrations of AI capabilities can be attention-grabbing, but it's crucial to use AI responsibly and ethically. As with any powerful technology, there are potential risks, such as misuse or perpetuating harmful stereotypes. So, while there is genuine utility in AI, there must be a balance in understanding its capabilities and using it in ways that benefit society.

    In conclusion, AI is not all hype, and it has genuine practical applications in various industries. However, it’s crucial to approach AI investment and usage with a critical eye, understanding its limitations, and using it responsibly to unlock its true potential. Like any tool, it can be valuable when applied appropriately and with clear goals in mind.