Big O Notation quantifies algorithm efficiency based on input size. `O(n)` means linear time, like scanning an array. Pro-Tip: Understand Big O to pick the *right* data structure. A fast algorithm on the wrong structure is still slow!
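A quick sketch of that tip in Python (the sizes and values here are arbitrary): the same membership test is O(n) on a list (linear scan) but O(1) on average on a set (hash lookup), so the structure, not the algorithm, decides the speed.

```python
import timeit

# Same data, two structures. Membership testing is O(n) on a list
# (linear scan) but O(1) on average for a set (hash lookup).
data_list = list(range(100_000))
data_set = set(data_list)

target = 99_999  # worst case for the linear scan: last element

list_time = timeit.timeit(lambda: target in data_list, number=100)
set_time = timeit.timeit(lambda: target in data_set, number=100)

print(f"list scan: {list_time:.6f}s, set lookup: {set_time:.6f}s")
```

On any typical machine the set lookup is orders of magnitude faster, even though both lines "do the same thing".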

#DataStructures #AlgorithmAnalysis #ComputerEngineering #TechStudent

150 years ago, Wilhelm von Bezold delivered the first lecture on electrical telegraphy at our #university! This laid the foundation for a discipline that continues to shape our daily lives to this day: #electrical and #computerengineering: http://go.tum.de/187885

📷A. Heddergott

150 Years of Electrical and Computer Engineering at TUM

In 1876, Wilhelm von Bezold delivered the first lecture on electrical telegraphy at the Technische Hochschule München - now known as TUM. This laid…

Senior Cloud/Infrastructure Engineer

Post a job in 3min, or find thousands of job offers like this one at jobRxiv!

jobRxiv

Mathematical Aspects of Computer Engineering by V. P. Maslov (Ed.) and K. A. Volosov (Ed.)

The present collection of articles is the result of many years of research conducted by our team into various aspects of designing and building the component base of promising high-speed computational systems. The articles deal with the following topics: (a) the optimal design and functioning of parallel computational systems, (b) the optimal recognition of optical and acoustic fields in synthesizing an optimal dynamic analyzer, and (c) the modeling of nonlinear transfer processes in the component base of a computer. We discuss new mathematical methods that can be applied in solving specific problems arising in the construction of mathematical models for handling the above-mentioned three topics. Although various countries have developed devices and technological processes for creating new generations of computers, there is still no general theoretical approach. In this respect the present collection fills an important gap in the literature on the subject. All results set forth in this collection are new and obtained only recently.

Series: Advances in Science and Technology in the USSR
Mathematics and Mechanics Series

Translated from Russian by Eugene Yankovsky

You can get the book here and here

Credits to the original uploaders. This is a cleaned, optimised scan.

Follow us on

Twitter https://x.com/MirTitles

Mastodon https://mastodon.social/@mirtitles

Bluesky https://bsky.app/profile/mirtitles.bsky.social

Tumblr https://www.tumblr.com/mirtitles

Internet Archive https://archive.org/details/mir-titles

Fork us on GitLab https://gitlab.com/mirtitles

Table of Contents

Preface 7

1. Design of Computational Media: Mathematical Aspects
by V. V. Avdoshin, V. P. Belov, V. P. Maslov, and A. M. Chehotarev 9
1.0 A Brief Survey 9
1.1 The Theory of Linear Equations in Semi-modules 22
1.2 Analysis of Discrete Computational Media 72
1.3 Optimization Problems of Functioning of Computational Systems 106
1.4 Flexible Automatic Manufacturing of Computational Media 116
1.5 Algorithms for Solving the Generalized Bellman Equation 127
References 142

2. Design of the Optimal Dynamic Analyzer: Mathematical Aspects of Sound and Visual Pattern Recognition
by V. P. Belavkin and V. P. Maslov 146
2.0 A Brief Survey 146
2.1 Representation and Measurement of Acoustic Signals and Optical Fields 149
2.2 Optimal Detection and Discrimination of Acoustic Signals and Optical Field 173
2.3 Effective Measurement and Estimation of Parameters of Acoustic Signals and Optical Fields 209
References 236

3. Mathematical Models in Computer-component Technology: Asymptotic Methods of Solution
by V. G. Danilov, V. P. Maslov, and K. A. Volosov 238
3.0 A Brief Survey 238
3.1 Models of Stages of Production and the Functioning of Computer Components 240
3.2 Properties of Standard Equations 262
3.3 A Time-dependent Model of Thermal Oxidation of Silicon 275
3.4 Oxidation of Silicon in a Halogen-containing Medium 279
3.5 Models of Mass Transfer 299
3.6 Diffusion of Light in an Active Medium 320
3.7 Solution of Equations of the Ginzburg-Landau Type. Waves in Ferromagnetic Substances 355
3.8 Asymptotic and Characteristic Exact Solutions to Semi-linear and Quasilinear Parabolic and Hyperbolic Equations 358
References 382

Name Index 384
Subject Index 387

#1988 #asymptoticMethods #computationalMedia #computerEngineering #mathematicalModelling #sovietLiterature

I saw this on Mastodon and almost had a stroke.

@davidgerard wrote:

“Most of the AI coding claims are conveniently nondisprovable. What studies there are show it not helping coding at all, or making it worse

But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.

These guys had one good experience with the bot, they got one-shotted, and now if you say “perhaps the bot is not all that” they act like you’re trying to take their cocaine away.”

First, the claim is falsifiable, and proving propositions about algorithms (i.e., code) is part of what I do for a living. Mathematically, human-written code and AI-written code can both be tested, which means propositions about them can be falsified. You would test them the same way.

There is no intrinsic mathematical distinction between code written by a person and code produced by an AI system. In both cases, the result is a formal program made of logic and structure, and in principle the same testing techniques can be applied to each. If claims about AI-generated code were really nondisprovable, testing could reveal no difference between code written by a human and code written by AI. But testing does reveal differences: studies have found that AI-generated code tends to exhibit a higher frequency of certain types of defects, so reviewers and testers know what logic flaws and security weaknesses to look for. That would not be the case if the claims were nondisprovable.

You can study this from datasets where the source of the code is known. You can use open-source pull requests identified as AI-assisted versus those written without such tools. You then evaluate both groups using the same industry-standard analysis tools: static analyzers, complexity metrics, security scanners, and defect classification systems. These tools flag bugs, vulnerabilities, performance issues, and maintainability concerns. They do so in a consistent way across samples.

A widely cited analysis of 470 real pull requests reported that AI-generated contributions contained roughly 1.7 times as many issues on average as human-written ones, including a higher number of critical and major defects and more logic- and security-related problems. Because these findings rely on standard measurement tools — counting defects, grading severity, and comparing issue rates — the results are grounded in observable data. Again, that is my point: it’s testable and therefore disprovable.
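To sketch the shape of such a comparison (the counts below are invented for illustration, not the study's data; a real study would take per-PR issue counts from static analyzers and security scanners):

```python
from statistics import mean

# Hypothetical issue counts per pull request, as a static analyzer or
# security scanner might report them. These numbers are made up; the
# real study analyzed 470 PRs with industry-standard tools.
human_issues = [2, 0, 1, 3, 1, 0, 2, 1]
ai_issues = [3, 2, 4, 1, 3, 2, 5, 2]

human_rate = mean(human_issues)
ai_rate = mean(ai_issues)

# The claim is falsifiable: the ratio either exceeds 1 or it doesn't.
ratio = ai_rate / human_rate
print(f"AI/human issue ratio: {ratio:.2f}")
```

The same pipeline run on different samples could just as easily produce a ratio below 1, which is exactly what makes the claim testable.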

This is a good paper that goes into it:

In this paper, we present a large-scale comparison of code authored by human developers and three state-of-the-art LLMs, i.e., ChatGPT, DeepSeek-Coder, and Qwen-Coder, on multiple dimensions of software quality: code defects, security vulnerabilities, and structural complexity. Our evaluation spans over 500k code samples in two widely used languages, Python and Java, classifying defects via Orthogonal Defect Classification and security vulnerabilities using the Common Weakness Enumeration. We find that AI-generated code is generally simpler and more repetitive, yet more prone to unused constructs and hardcoded debugging, while human-written code exhibits greater structural complexity and a higher concentration of maintainability issues. Notably, AI-generated code also contains more high-risk security vulnerabilities. These findings highlight the distinct defect profiles of AI- and human-authored code and underscore the need for specialized quality assurance practices in AI-assisted programming.

https://arxiv.org/abs/2508.21634

The big problem in discussions about AI in programming is either-or thinking: it’s not a choice between using it everywhere and banning it entirely. Tools like AI have specific strengths and weaknesses. Saying ‘never’ or ‘always’ oversimplifies the issue and turns the narrative into propaganda that either creates moral panic or shills AI. It’s a bit like saying you shouldn’t use a hammer just because it’s not good for brushing your teeth.

AI tends to produce code that’s simple, often a bit repetitive, and very verbose. It’s usually pretty easy to read and tweak, which helps with long-term maintenance. But AI doesn’t reason about code the way an experienced developer does. It makes mistakes that a human wouldn’t, potentially introducing security flaws. That doesn’t mean we shouldn’t use it where it works well, which is not everywhere.

AI works well for certain tasks, especially when the scope is narrow and the risk is low. Examples include generating boilerplate code, internal utilities, or prototypes. In these cases, the tradeoff is manageable. However, it’s not suitable for critical code like kernels, operating systems, compilers, or cryptographic libraries. A small mistake in memory safety or privilege separation can lead to major failures, and so can problems with synchronization, pointer management, or access control.

Other areas where AI should not be used include memory allocation, scheduling, process isolation, and device drivers. A lot of that code depends on implicit assumptions in the system’s architecture, and generative models don’t grasp these nuances. Instead of carefully considering the design, AI tends to replicate code patterns that seem statistically likely, without understanding the purpose behind them.

Yes, I’m aware that Microsoft is using AI to write code everywhere I said it should not be used. That is the problem. However, political pundits, lobbyists, and anti-tech talking heads are discussing something they have no understanding of and aren’t specifying what the problem actually is. This means they can’t possibly lead grassroots initiatives into actual laws that specify where AI should not be used, which is why we have this weird astroturfing bullshit.

They’re taking advantage of the reaction to Microsoft using AI-generated code where it shouldn’t be used to argue that AI shouldn’t be used anywhere at all in any generative context. AI is useful for tasks like writing documentation, generating tests, suggesting code improvements, or brainstorming alternative approaches. These ideas should then be thoroughly vetted by human developers.

Something I’ve started to notice about a lot of the content on social media platforms is that most of the posts people are liking, sharing, and memetically mutating—and then spreading virally—usually don’t include any citations, sources, or receipts. It’s often just some out-of-context screenshot with no reference link or actual sources.

A lot of the anti-AI content is not genuine critique. It’s often misinformation, but people who hate AI don’t question it or ask for sources because it aligns with their biases. The propaganda on social media has gotten so bad that anything other than heavily curated and vetted feeds is pretty much useless, and it’s filled with all sorts of memetic contagions with nasty hooks that are optimized for you algorithmically. I am at the point where I will disregard anything that is not followed up with a source. Period. It is all optimized to persuade, coerce, or piss you off. I am only writing about this because I’m actually able to contribute genuine information about the topic.

That they said symbolic propositions written by AI agents (i.e., code) are nondisprovable because they were written by AI boggles my mind. It’s like saying that an article written in English by AI is not English because AI generated it. It might be a bad piece of text, but it’s still syntactically, semantically, and grammatically English.

Basically, any string of data can be represented in a base-2 system, where it can be interpreted as bits (0s and 1s). Those bits can be used as the basis for symbolic reasoning. In formal propositional logic, a proposition is a sequence of symbols constructed according to strict syntax rules (atomic variables plus logical connectives). Under a given semantics, it is assigned exactly one truth value (true or false) in a two-valued logic system.
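As a minimal sketch of that idea (the formula and variable names here are made up for illustration): a proposition built from atomic variables and connectives, under a given assignment of truth values, evaluates to exactly one of the two truth values.

```python
# A proposition is a symbol string built from atomic variables and
# logical connectives. Under a given semantics (an assignment of truth
# values to the atoms), it evaluates to exactly one of True or False.
def proposition(p: bool, q: bool, r: bool) -> bool:
    # The formula (p AND NOT q) OR r in two-valued logic
    return (p and not q) or r

# One assignment, one truth value:
print(proposition(True, False, False))   # True
print(proposition(False, True, False))   # False
```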

They are essentially saying that code written by AI is not binary, isn’t symbolically logical at all, and cannot be evaluated as true or false by implying it is nondisprovable. At the lowest level, compiled code consists of binary machine instructions that a processor executes. At higher levels, source code is written in symbolic syntax that humans and tools use to express logic and structure. You can also translate parts of code into formal logic expressions: conditions and assertions in a program can be modeled as Boolean formulas, and tools like SAT/SMT solvers or symbolic execution engines check those formulas for satisfiability or correctness. It blows my mind how confidently people talk about things they do not understand.
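A brute-force sketch of what a satisfiability check does (real solvers like Z3 or MiniSat are vastly smarter than this, and the example formulas are invented):

```python
from itertools import product

def is_satisfiable(formula, num_vars):
    # Try every assignment of True/False to the variables; the formula
    # is satisfiable if at least one assignment makes it True.
    return any(formula(*bits) for bits in product([False, True], repeat=num_vars))

# A program condition modeled as a Boolean formula: "input is valid
# AND index is not out of range". Some assignment satisfies it.
condition = lambda valid, out_of_range: valid and not out_of_range
print(is_satisfiable(condition, 2))        # True

# A contradiction, "p AND NOT p", has no satisfying assignment: the
# claim that it can ever hold is disproved.
contradiction = lambda p: p and not p
print(is_satisfiable(contradiction, 1))    # False
```

The point is only that code, whoever wrote it, reduces to formulas like these, and formulas can be checked.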

Furthermore, it’s wild to me that they don’t realize the projection.

@davidgerard wrote:

“But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.”

They are presenting a story—i.e., saying that the studies are not disprovable—and accusing computer scientists of using anecdotal evidence without actually providing evidence to support this, while expecting people to take it prima facie. They’re doing exactly what they accuse others of doing.

It comes down to this: they feel that people ought not to use AI, so they are tacitly committed to a future in which people do not use AI. For example, a major argument against AI is the damage it is doing to resources, which is driving up the prices of computer components, as well as the ecological harm it causes. They feel justified in lying and misinforming others if it achieves the outcome they want—people not using AI because it is bad for the environment. That is a very strong point, but most people don’t care about that, which is why they lie about things people would care about.

It’s corrupt. And what’s really scary is that people don’t recognize when they are part of corruption or a corrupt conspiracy to misinform. Well, they recognize it when they see the other side doing it, that is. No one is more dangerous than people who feel righteous in what they are doing.

It’s wild to me how normalized this idea has become on the Internet: if you cannot persuade someone, it is okay to bully, coerce, or harass them, or to spread misinformation to get what you want, because your side is right. People can’t even see why that is problematic.

That people think it is okay to hurt others to get them to agree is the most disturbing part of all of this. People have become so hateful. That is a large reason why I don’t interact with people on social media, really consume things from it, or respond there, and why I am writing a blog post about this instead of engaging with the person who prompted it.

Human-Written vs. AI-Generated Code: A Large-Scale Study of Defects, Vulnerabilities, and Complexity


There is this YouTube video series about the #gameboy 's #hardware engineering, and it's one of the best introductions to #CPU s and how CPUs process code with registers and stuff. It was released 9 years ago and they LEFT US HANGING FOR THE REST

I NEED TO KNOW HOW THE GAMEBOY HANDLES RAM

PLEASE!

#computerengineering

Edit: link
https://youtu.be/RZUDEaLa5Nw?si=OY09lS6HMJCLFiHQ

The Game Boy, a hardware autopsy - Part 1: the CPU [PART 2 OUT NOW!]

YouTube

Hey! I'm JJ, a #computerengineering student, junior #cybersecurity analyst and #iot / #embedded nerd.

I am currently working on an embedded #Linux based Honeypot, and using it to learn about pure #C , #crossCompiling and #LinuxKernel

I am also very interested in #hardwareHacking , #malware , #reverseEngineering and overall low level stuff.

Also am #FOSS lover.

Moved profiles from mastodon.social recently. I hope the fediverse gives me a healthier relationship with social media.

#introduction

Job Alert

Junior Scientist (f/m/d) – Algorithm Development for Innovative Medical Software

Deadline: open until filled
Location: Austria - Vienna

https://www.academiceurope.com/ads/junior-scientist-w-m-d-algorithmenentwicklung-fur-innovative-medizinische-software/

#Biomedicalengineering #ComputerEngineering #ElectricalEngineering #informatics #JuniorScientist #hiring