| Website | https://www.healthdatanerd.org |
| LinkedIn | https://www.linkedin.com/feed/ |
| TikTok | @jessRmorley_ |
Protecting against misuse without forfeiting the potential benefits is key.

The global market for artificial intelligence (AI) for healthcare is booming. Currently valued at $15bn (£12bn; €14bn), it is expected to near $200bn by 2030.12 Public sector spending on AI for healthcare is also on the rise. The UK government, for example, has already invested more than £123m in AI for healthcare technologies.3 Of course, previous AI "summers" were followed swiftly by AI "winters," when the gap between expectations and the reality of AI grew too wide. This summer's heat and longevity can largely be attributed to the advent of generative AI. Generative AI is a type of machine learning capable of generating data in a range of formats (including text, image, audio, video, or code) and adapting to new tasks in real time, following simple text-based prompts. These capabilities make generative AI flexible, as one "model" (for example, ChatGPT or DALL-E) can be used for a variety of tasks, including medical research tasks, without having to be retrained. This flexibility makes generative AI appealing …
Although *immensely* frustrating, it's also somehow appropriate that, whilst writing about how difficult it is to get AI to work in contexts where the underpinning infrastructure (hardware & software) doesn't have the required capacity, my laptop keeps crashing.
🙏🏻 cloud & autosave