Dan Fernandez  

@danielfernandez@infosec.exchange
459 Followers
273 Following
275 Posts
Product Leader; ML + Cybersecurity
@some_natalie I find it effective to use an LLM to generate a prompt to feed into another LLM that generates a Mermaid diagram. Full circle 🤣
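A minimal sketch of that two-step chain, assuming the OpenAI Python SDK and a placeholder model name (neither is from the original post):

```python
# Sketch: one LLM call writes a diagram-generation prompt, and a second
# call turns that prompt into Mermaid source. Assumes the `openai` SDK
# and an OPENAI_API_KEY in the environment; the model is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: have the model write a detailed diagram prompt.
diagram_prompt = ask(
    "Write a prompt asking an LLM to produce a Mermaid flowchart of a "
    "typical CI/CD pipeline. Output only the prompt text."
)

# Step 2: feed the generated prompt back in to get the Mermaid diagram.
print(ask(diagram_prompt))
```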

Reflections from a week of extremely human conversations about AI:

There are a lot of reasons why I believe AI will not replace humans, but one of the main ones is this: no matter how much you have in common with any one person, human interaction almost always leads to non-linear thinking. And that’s largely because, while we all have humanity in common, we each have a unique set of experiences.

To put it in geekier, more machine-learning-centric terms:
Humans all share the same underlying “statistical model,” but we each have uniquely different “training data,” and that is what leads to a world where each of us ends up with a uniquely different “optimization function.”

Quite frankly that’s what makes startups unique, cities dynamic and things fun.

Looking forward to exploring even more training data in upcoming conversations.

I sometimes consider venturing out of the Apple ecosystem, but then I just updated my phone, tablet, and laptop to the same point release simultaneously without a hiccup.

Happy Pi Day! For those who celebrate 🤓!

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778053217122680661300192787661119590921642019893809525720106548586327886593615338182796823030195203530185296899577362259941389124972177528347913151557485724245415069595082953311686172785588907509838175463746493931925506040092770167113900984882401285836160356370766010471018194295559619894676783744944825537977472684710404753464620804668425906949129331367702898915210475216205696602405803815019351125338243003558764024749647326391419927260426992279678235478163600934172164121992458631503028618297455570674983850549458858692699569092721079750930295532116534498720275596023648066549911988183479775356636980742654252786255181841757467289097777279380008164706001614524919217321721477235014144197356854816136115735255213347574184946843852332390739414333454776241686251898356948556209921922218427255025425688767179049460165346680498862723279178608578438382796797668145410095388378636095068006422512520511739298489608412848862694560424196528502221066118630674427862203919494504712371378696095636437191728746776465757396241389086583264599581339047802759009946576407895126946839835259570982582262052248940772671947826848260147699090264013639443745530506820349625245174939965143142980919065925093722169646151570985838741059788595977297549893016175392846813826868386894277415599185592524595395943104997252468084598727364469584865383673622262609912460805124388439045124413654976278079771569143599770012961608944169486855584840635342207222582848864815845602850601684273945226746767889525213852254995466672782398645659611635488623057745649803559363456817432411251507606947945109659609402522887971089314566913686722874894056010150330861792868092087476091782493858900971490967598526136554978189312978482168299894872265880485756401427047755513237964145152374623436454285844479526586782105114135473573952311342716610213596953623144295248493718711014576540359027993440374200731057853906219838744780847848968332144571386875194350643021845319104848100537061
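For anyone who wants to regenerate those digits themselves, a quick sketch with the mpmath library (the precision is an arbitrary choice):

```python
# Sketch: print pi to a chosen number of decimal digits using mpmath.
from mpmath import mp

mp.dps = 1000  # decimal digits of working precision (arbitrary choice)
print(mp.pi)   # 3.14159265358979... out to ~1000 digits
```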

I’m seeing a lot of buzz around DeepSeek’s performance-vs-cost ratio. I’m also seeing claims that it cannot answer questions about tank events and bear cartoons; that’s odd, unless specific funding is underreported. I’m not good with language models or geopolitics… asking for a friend.

So the Honey browser extension was basically Adware / Malware that also performed e-commerce fraud? Yikes.

https://youtu.be/vc4yL3YTwWk

Exposing the Honey Influencer Scam (YouTube)
.@CYBERWARCON FOMO has become too real… I’m going to have to make my way there in 2025 somehow.
@tasket Haha please expand…
I’m not an “Apple Fanboy,” but I’m largely happy with their products. As an AI optimist, Apple Intelligence is what I hope AI never becomes. How can they get it so wrong? It’s great we have @perplexity_ai (research), @AnthropicAI (code), @midjourney (images), and @openai (brainstorming).

The current and future state of AI workloads, at least based on the most recent discussions, appears to converge around scaling challenges.

There is an ongoing debate about whether pre-training of large language models is facing a deceleration in improvement rates, possibly due to limitations in available data and optimization techniques. However, post-training and runtime inference continue to show progress, driven by improvements in data quality and memory efficiency.
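For intuition on the data-limitation side of that debate, here’s a minimal sketch using the Chinchilla scaling law L(N, D) = E + A/N^α + B/D^β with the constants fitted by Hoffmann et al. (2022); the model size and token counts are arbitrary illustrative assumptions:

```python
# Sketch: diminishing returns from adding pre-training data under the
# Chinchilla scaling law, L(N, D) = E + A/N**alpha + B/D**beta.
# Constants are the fitted values from Hoffmann et al. (2022); the
# parameter and token counts below are arbitrary illustrative choices.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for N parameters and D tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

N = 70e9  # a hypothetical 70B-parameter model
for D in (1e12, 2e12, 4e12, 8e12):  # 1T to 8T training tokens
    print(f"{D:.0e} tokens -> predicted loss {predicted_loss(N, D):.3f}")

# Each doubling of data buys a smaller loss improvement, which is the
# "pre-training is decelerating" argument in a nutshell.
```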

As AI infrastructure evolves, there is a shift toward smaller, more efficient models that can operate effectively on less powerful hardware. This is a good thing. While some companies are focusing on large cluster sizes for pre-training, most are exploring alternative architectures for inference to handle expanding context windows. That would allow for less prompting and even greater accessibility for all users, across both the consumer and enterprise segments.
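To make the context-window pressure concrete, here’s a back-of-the-envelope sketch of per-request KV-cache memory; the model dimensions are hypothetical (roughly 7B-class), not something from the post:

```python
# Sketch: KV-cache memory grows linearly with context length, which is
# why memory efficiency dominates long-context inference costs.
# Size = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes.
# The dimensions below are hypothetical, roughly 7B-class values.
LAYERS, KV_HEADS, HEAD_DIM = 32, 32, 128
BYTES_PER_VALUE = 2  # fp16/bf16

def kv_cache_gib(seq_len: int) -> float:
    """Per-request KV-cache size in GiB at a given context length."""
    total = 2 * LAYERS * KV_HEADS * HEAD_DIM * seq_len * BYTES_PER_VALUE
    return total / 2**30

for ctx in (4_096, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_gib(ctx):4.0f} GiB per request")

# ~2 GiB at 4k tokens but ~64 GiB at 128k: long contexts quickly rival
# the weights themselves, motivating GQA, cache quantization, and other
# inference-side optimizations.
```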

My two cents: the average enterprise will have to focus more on investing in better, more context-aware agents and will care more about inference than training, expanding the demand for new hardware alternatives. We could all benefit from more optimization and openness.

#ai #mlops #aiinfrastructure #ml #llms