IncognitoErgoSum


I'm guessing that when you're losing an argument, you like to post a response and then block the other person so you get the last word, then convince yourself that the other person was a "sealion" or something. Reddit's block system is primarily used that way. If you don't like how blocking works here, I recommend Reddit.

I personally came here to get away from Reddit's "features" like private downvotes and silencing people who disagree with you, because they promote exactly the kind of toxic discussion I want to avoid.

If you're being harassed, report it.

Unfortunately, a threat pretty much has to specify a time and place to be legally actionable. These guys are very familiar with how those laws work and know exactly how to avoid getting caught by them.

I don't believe that current AIs should have rights. They aren't conscious.

My point was purely that AIs learn concepts and that concepts aren't copyrightable. Encoding concepts into neurons (that is, learning) doesn't require consciousness.

I'm willing to, but if I take the time to do that, are you going to listen to my answer, or just dismiss everything I say and go back to thinking what you want to think?

Also, a couple of preliminary questions to help me explain things:

What's your level of familiarity with the source material? How much experience do you have writing or modifying code that deals with neural networks? My own familiarity lies mostly with PyTorch. Do you use that or something else? If you don't have any direct familiarity with programming with neural networks, do you have enough of a familiarity with them to at least know what some of those boxes mean, or do I need to explain them all?

Most importantly, when I say that neural networks like GPT-* use artificial neurons, are you objecting to that statement?

I need to know what it is I'm explaining.

Except an AI is not taking inspiration, it's compiling information to determine mathematical averages.

The AIs we're talking about are neural networks. They don't do statistics, they don't have databases, and they don't take mathematical averages. They simulate neurons, and their ability to learn concepts is emergent from that, the same way the human brain's is. Nothing about an artificial neuron ever takes an average of anything, reads any database, or does any statistical calculations. If an artificial neural network can be said to be doing those things, then so can the human brain.
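To make that concrete, here's a minimal sketch (not any real model's code) of what a single artificial neuron actually computes: a weighted sum of its inputs plus a bias, passed through a nonlinearity. The weights below are made up for illustration; in a real network they're learned by gradient descent.

```python
import math

def neuron(inputs, weights, bias):
    # One artificial neuron: weighted sum + bias, then a sigmoid activation.
    # Note there's no database lookup and no averaging of training examples
    # anywhere in here -- everything the neuron "knows" lives in its weights.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy example with arbitrary, illustrative weights:
print(neuron([0.5, -1.0], [0.8, 0.3], 0.1))  # ~0.55
```

A trained model is just millions or billions of these stacked together, which is why "it averages images from a database" is the wrong mental model.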

There is nothing magical about how human neurons work. Researchers are already growing small networks out of animal neurons and using them the same way that we use artificial neural networks.

There are a lot of "how AI works" articles out there that put things in layman's terms (and use phrases like "statistical analysis" and "mathematical averages"), and unfortunately people (including many very smart people) extrapolate from the incorrect information in those articles and end up making bad assumptions about how AI actually works.

A human being is paid for the work they do; an AI program's creator is paid for the work it did. And if that creator used copyrighted work, then he should have to get permission to use it, because he's profiting off this AI program.

If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they're making a profit from that copyrighted work. Human beings should, as you said, be paid for the work they do. Right? If an artist goes to art school, they should pay all of the artists whose work they learned from, right? If a teacher teaches children in a class, that teacher should be paid a royalty each time those children make use of the knowledge they were taught, right? (I sense a sidetrack -- yes, teachers are horribly underpaid and we desperately need to fix that, so please don't misconstrue that previous sentence.)

There's a reason we don't copyright facts, styles, and concepts.

Oh, and if you want to talk about something that stores an actual database of scraped data, makes mathematical and statistical inferences, and reproduces things exactly, look no further than Google. It's already been determined in court that what Google does is fair use.

Losing their life because an AI was improperly placed in a decision-making position after being sold as having more capabilities than it actually has.

I would tend to agree with you on this one, although we don't need bad copyright legislation to deal with it, since laws can deal with it more directly. I would personally put in place an organization that requires rigorous proof that AI in those roles is significantly safer than a human, like the FDA does for medication.

As for the average person who has the computer hardware and time to train an AI (bear in mind Google Bard and Open AI use human contractors to correct misinformation in the answers as well as scanning), there is a ton of public domain writing out there.

Corporations would love if regular people were only allowed to train their AIs on things that are 75 years out of date. Creative interpretations of copyright law aren't going to stop billion- and trillion-dollar companies from licensing things to train AI on, either by paying a tiny percentage of their war chests or just ignoring the law altogether the way Meta always does, and getting a customary slap on the wrist. What will end up happening is that Meta, Alphabet, Microsoft, Elon Musk and his companies, government organizations, etc. will all have access to AIs that know current, useful, and relevant things, and the rest of us will not, or we'll have to pay monthly for the privilege of access to a limited version of that knowledge, further enriching those groups.

Furthermore, if they're using people's creativity to make a product, it's just WRONG not to have permission or to not credit them.

Let's talk about Stable Diffusion for a moment. Stable Diffusion models can be compressed down to about 2 gigabytes and still produce art. Stable Diffusion was trained on 5 billion images and finetuned on a subset of 600 million images, which means that the average image contributes 2 billion bytes ÷ 600 million images, or a little bit over three bytes, to the final model. With the exception of a few mostly public domain images that appeared in the dataset hundreds of times, Stable Diffusion learned broad concepts from large numbers of images, similarly to how a human artist would learn art concepts. If people need permission to learn a teeny bit of information from each image (3 bytes of information isn't copyrightable, btw), then artists should have to get permission for every single image they put on their mood boards or use for inspiration, because they're taking orders of magnitude more than three bytes of information from each image they use for inspiration on a given work.

The AI genie is here. What we're deciding now is whether we all have access to it, or whether it's a privilege afforded only to rich people, corporations, and governments.

I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but I think what people will need to consider is that all that's going to do is keep AI out of the hands of regular people and place it specifically in the hands of people and... #ai #copyright #tech

https://kbin.social/m/tech/t/186867


Underextrusion issues with all-metal hotend, particularly with glitter filament (modified Ender 3 Pro), even after adjusting retraction

I've been having some difficulty with underextrusion on my new all-metal hotend. I've set my retraction distance to 1.5mm (1.0 leaves strings), but on regular PLA I'm getting occasional layers that don't print very well (particularly if there's a lot of stopping and starting), and glitter PLA is an absolute disaster....

https://kbin.social/m/3dprinting@lemmy.world/t/117957


I don't want kbin to be a far-leftist echo chamber. I also don't want kbin to be a far-right echo chamber. I think it's perfectly reasonable to want to protect a community from extreme and hateful views, regardless of which side they come from, because those views tend to attract the type of horrible, toxic people such as yourself who advocate beating the shit out of people for being different in a harmless way.

Welcome to the real world, where people who are different from you exist and mind their own business. If you can't put up with people who don't affect you in any way, I don't think the rest of us owe it to you to put up with you, either. Go find a cesspit to wallow in.

Another fun fact: You can use it yourself. Just spread it on the floor where you have a bug infestation and it'll kill them. It's harmless as long as you don't inhale the dust (in fact, I believe there's even a food-grade version).

If you have bedbugs, you can surround your bed with it, and it's really effective. It also takes out cockroaches, I believe.