How should I increment an int?

https://feddit.uk/post/22390781


I’ve been coding for ~15 years, and professionally for about 8 years. A couple of years ago, my friend wanted to learn programming, so I was giving her a hand with resources and reviewing her code. She got to the part on adding code comments, and wrote the now-infamous line:

```python
i = i + 1 #this increments i
```

We’ve all written superfluous comments, especially as beginners. And it’s not even really funny, but for whatever reason, somehow we both remember this specific line years later and laugh at it together.

Years later (this week), to poke fun, I started writing sillier and sillier ways to increment `i`:

Beginner level:

```python
# this increments i:
x = i
x = x + int(True)
i = x
```

Beginner++ level:

```python
# this increments i:
def increment(val):
    for i in range(val):
        output = i
    output = i + 1
    return output
```

Intermediate level:

```python
# this increments i:
class NumIncrementor:
    def __init__(self, initial_num):
        self.internal_num = initial_num

    def increment_number(self):
        incremented_number = 0
        # we add 1 each iteration for indexing reasons
        for i in list(range(self.internal_num)) + [len(range(self.internal_num))]:
            # fix obo error by incrementing i.
            # I won't use recursion, I won't use recursion, I won't use recursion
            incremented_number = i + 1
        self.internal_num = incremented_number

    def get_incremented_number(self):
        return self.internal_num

i = int(input("Enter a number: "))
incrementor = NumIncrementor(i)
incrementor.increment_number()
i = incrementor.get_incremented_number()
print(i)
```

Since I’m obviously very bored, I thought I’d hear your take on the “best” way to increment an int in your language of choice - I don’t think my code is quite expert-level enough. Consider it a sort of Advent of Code challenge? Any code which does not contain the comment “this increments i:” will produce a compile error and fail to run. No AI code pls. That’s no fun.
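For anyone puzzling over the intermediate version: `len(range(n))` is just `n`, so tacking it onto `list(range(n))` produces the same values as `range(n + 1)`, and the loop’s final iteration is what lands on `n + 1`. A minimal sketch of why that works:

```python
n = 5

# len(range(n)) == n, so the concatenated list is 0..n inclusive,
# i.e. exactly list(range(n + 1))
assert list(range(n)) + [len(range(n))] == list(range(n + 1))

# the loop's last value is i == n, so incremented_number ends as n + 1
incremented_number = 0
for i in list(range(n)) + [len(range(n))]:
    incremented_number = i + 1
assert incremented_number == n + 1
```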

```cpp
// C++20
#include <concepts>
#include <cstdint>

template <typename T>
concept C = requires (T t) { { b(t) } -> std::same_as<int>; };

char b(bool v) { return char(uintmax_t(v) % 5); }

#define Int jnt=i

auto b(char v) { return 'int'; }

// this increments i:
void inc(int& i) {
    auto Int == 1;
    using c = decltype(b(jnt));
    i += decltype(jnt)(C<decltype(b(c))>);
}
```

I’m not quite sure it compiles; I wrote this on my phone, and with the sheer number of landmines here, making a mistake is almost inevitable.

I got GPT to explain this, and it really does not like this code, haha.

It also said multiple times that C++ won’t allow the literal `'int'`? I’d be surprised if that were true. A quick search turns up no results, so probably not.

In C, single quotes are for single chars only, while `int` is a string. That means you would need `"` around it. I think.
Multiple-character char literals evaluate as `int`, with implementation-defined values. It’s extremely unreliable, but that particular piece of code should work.