There is always a limit, but it is surely much higher than 10.000.
I have already been past that number, though my current total isn't that high.
I am sure it's at least 99.999 if not 999.999.
When a database field is created you typically have to define how many digits it can hold (for a fixed-width numeric field, for instance), so the cap usually ends up being an all-9s number.
In the beginning of the game there was a beta mod who posted pictures from their test server, and I recall the KP limit, for example, being 999.999 KP.
Storage in memory (RAM) is addressed in binary. When you program, you pick the type of data you want to use for each piece of information in your program. These "data types" therefore have limits, since you set aside only a certain amount of space for each of them. The size of that space is measured in bits, which is binary, so the space you set aside is based on the binary amount you need. For instance, since Boolean data can only be 1 or 0 (off/on and so on), the boolean data type needs only a single bit of storage. A character (char) has to be one of 256 characters, and since it takes 8 bits to distinguish 256 values, it gets one byte of space. A word is 16 bits, and a double word is 32 bits; a typical int these days is 32 bits as well. (All this is for 32-bit programming; 64-bit uses the same types but usually doubles some of the size limits.)

There are two things to take away from all this: 1) you set aside space based on the data type you say your program will use for this or that piece of information; and 2) all that space is measured in binary, which means the sizes come in powers of two: 2^0 = 1 bit, 2^2 = 4 bits (a "nibble," believe it or not), 2^3 = 8 bits or a byte (so a nibble is half a bite! LOL, and we all thought programming couldn't be fun!). And the data types that match these sizes? 1 bit for a Boolean, 4 bits for a nibble (half a byte), 8 bits for a char (character), 16 for a word, and 32 for a "double word" (I'll bet you could have predicted that one from "word").
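If you're curious, here's a minimal C sketch (assuming a typical desktop compiler; the exact sizes are platform-dependent and the C standard only guarantees minimums) that prints the size and maximum value of a few common types:

```c
#include <stdio.h>
#include <limits.h>
#include <stdbool.h>

/* Print the storage size and maximum value of a few common C types.
   Sizes are what a typical desktop compiler uses; your platform may differ. */
int main(void)
{
    printf("bool       : %zu byte(s)\n", sizeof(bool));
    printf("char       : %zu byte(s), max %d\n", sizeof(char), CHAR_MAX);
    printf("short/word : %zu byte(s), max %d\n", sizeof(unsigned short), USHRT_MAX);
    printf("int        : %zu byte(s), max %u\n", sizeof(unsigned int), UINT_MAX);
    printf("long long  : %zu byte(s), max %llu\n", sizeof(unsigned long long), ULLONG_MAX);
    return 0;
}
```

On most 64-bit desktop systems that prints 1, 1, 2, 4, and 8 bytes respectively.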
In any case, when you declare what a thing is in your program (what type of data it is), you are telling the system to set aside that amount of memory, and you can put things into that space up to the limit of that space. Unfortunately, sometimes a program produces a value larger than the space set aside. When that happens the number in that space "rolls over" back to 0. So you have to be sure to set aside enough space for the absolute largest value that thing can ever have.
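To make that concrete, here's a tiny C sketch (my own illustration, nothing to do with the game's actual code) of an 8-bit counter rolling over:

```c
#include <stdio.h>
#include <stdint.h>

/* An 8-bit unsigned counter can hold 0..255. Adding 1 to 255 "rolls over"
   back to 0, because there is no 9th bit to carry into. */
int main(void)
{
    uint8_t counter = 255;      /* every bit is already 1: 11111111 */
    counter = counter + 1;      /* wraps around to 00000000 */
    printf("255 + 1 stored in 8 bits = %u\n", (unsigned)counter);  /* prints 0 */
    return 0;
}
```

With signed types the overflow is even nastier (it's undefined behaviour in C), which is another reason to leave yourself plenty of headroom.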
Here's an example of the problem that wasn't! Remember Y2K? If you are younger than 25-30 you might not. The "Y2K scare" was over the fear of not enough space, and hence of programs rolling over to zero in some manner. Many thought that when that happened the world would come to a halt, the stock market would crash, your savings would be wiped out, and all sorts of really, really bad stuff. It didn't happen, of course, but the panic was quite real. I know one couple who quit their jobs and moved their family into the hills of West Virginia, into a "compound" complete with all the things needed to ward off the roving mobs of poor, starving citizens.

In any case, the problem with the so-called "roll-over" was that it couldn't have happened inside the machine, because 2000 is a decimal number, not a binary boundary. A binary number, the thing computers use to calculate with, takes up a given space for its type, but it only "rolls over" when all the bits reach "1", just as 99 becomes 100 when you add 1 and the last two digits roll over to 0s. Since 2000 is 11111010000 in binary, as you can see it takes 11 bits to represent, and since you can add one more to 11111010000, making it 11111010001, it doesn't roll over to 00000000000. So there was no way any computer would crash because it couldn't add one to the year 2000 without the memory space rolling over to all 0s. The whole Y2K scare was a bit silly, as I spent countless hours explaining, on the phone and in person, to businessmen all over the place. But...
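If you want to check that binary arithmetic yourself, here's a small sketch (plain standard C, nothing Y2K-specific) that prints 2000 in binary, adds one, and shows where a 16-bit slot actually wraps:

```c
#include <stdio.h>
#include <stdint.h>

/* Print a 16-bit value as binary digits, most significant bit first. */
static void print_bits(uint16_t v)
{
    for (int i = 15; i >= 0; i--)
        putchar((v >> i) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    uint16_t year = 2000;
    print_bits(year);       /* 0000011111010000: 11111010000 with leading zeros */
    print_bits(year + 1);   /* 0000011111010001: 2001, no roll-over */
    print_bits(65535 + 1);  /* 0000000000000000: the real wrap point of 16 bits */
    return 0;
}
```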
There could have been, and were, some problems with how data was displayed, though.
Display data, at the time, was mostly handled with something called ASCII (there are actually several such systems). ASCII stands for "American Standard Code for Information Interchange." Before the advent of graphics, the screen was divided into boxes, each box represented in memory by a single byte (8 bits). Each box contained a character, and since a byte can hold 256 different values (strictly, ASCII itself defines only 128 characters; the other 128 came from the machine's "extended" set), you just put the code of the character you wanted into the byte of memory set aside for that location on the screen, and the computer did the rest.
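As a rough model (a simplification I'm assuming for illustration; real text-mode video memory also kept a colour/attribute byte per cell), one row of such a screen is just an array of bytes holding character codes:

```c
#include <stdio.h>
#include <string.h>

/* A simplified model of one row of a text-mode screen: each "box" is one
   byte, and the byte's value is the code of the character to display. */
int main(void)
{
    unsigned char row[80];
    memset(row, ' ', sizeof(row));   /* blank the row with spaces (code 32) */

    row[0] = 72;   /* 'H' */
    row[1] = 105;  /* 'i' */
    row[2] = 33;   /* '!' */

    for (int i = 0; i < 10; i++)
        putchar(row[i]);             /* the "display" just looks the codes up */
    putchar('\n');
    return 0;
}
```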
Now, if you wonder (and perhaps you don't?), one of the reasons the Internet started as a character-based network of networks is that it takes something like 50 times more information to tell a system how to draw a letter on a screen than to send the letter as a code and let the local system pick from its hard-wired list of ASCII symbols. In the days of 300 baud (that's, I think, about 300 bits per second, compared with the 1,000,000,000 bits per second of a gigabit connection today) it was really a necessary thing: keeping things very, very small, because the pipeline was very, very tiny.
Here's a real-world example of what happens when people don't understand programming. The whole Y2K problem wasn't based upon some binary number rolling over so the value turned to 0 (computers compute in binary), but upon the displaying and printing of numbers. There was only one real problem in how programs were written, and that problem showed up only in printed or displayed documents; the underlying math didn't have a problem. If you wanted to print "2000" you just put 50, 48, 48, and 48 (the decimal ASCII codes for '2', '0', '0', '0'; you could also have used hexadecimal) in each of the four boxes where you wanted the number to be on screen.

But some programmers, because computers were very, very weak and slow when they wrote their programs, decided they could speed things up if they just assumed the first two digits were "1" and "9". If you remember, 2000 in binary is 11111010000, so it would take 11 bits to represent in the program, but 99 is only 1100011 in binary, 7 bits. And if you are writing in 1990, you've got 10 years before you have to deal with what happens when you get to the year 2000. So you program the "19" to be automatic and carry on, and since every bit counted back then, those extra 4 bits really meant something each time you had to refer to that number. In any case, as we know now, storing just part of the value caused some displays to say 1900 instead of 2000, and that scared a lot of people.
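Here's a toy sketch of that shortcut (my own example, not any real Y2K-era program): the year is stored as its last two digits, the math still works, but a hard-coded "19" is glued on for display:

```c
#include <stdio.h>

/* Toy illustration of the classic two-digit-year shortcut: only the last
   two digits of the year are stored, and "19" is assumed when printing.
   The arithmetic below is fine; only the displayed text goes wrong. */
static void print_year(int two_digit_year)
{
    printf("19%02d\n", two_digit_year);  /* the hard-coded "19" is the bug */
}

int main(void)
{
    int year = 1999 % 100;   /* stored as 99 */
    print_year(year);        /* prints 1999, looks fine */

    year = (year + 1) % 100; /* the calendar ticks over to 2000, stored as 0 */
    print_year(year);        /* prints 1900 instead of 2000 */
    return 0;
}
```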
How this relates to this thread is that we are really discussing display programming choices, not the actual calculating choices. A screen is still a finite space, and we usually display in decimal, so the "rollover points" are different from the underlying binary ones. On screen they are 9, 99, 999, 9999, 99999, and so on, where the count needs one more digit: 10, 100, 1000, 10000, and so on. If the screen designer didn't allow for more space to the left of the number, the display will usually show only the digits it can (00, 000, 0000, 00000 and so on), or it will deal with it in some other way. In the current FA, if you collect more than 100 items at a time it changes the number of items to a "+" sign, because the programmer didn't allow for three or more digits (even if the program itself can handle more than 99). The same is true when you get to 1,000 squads: you get ">999x". That's how the programmer dealt with too many troops to display an accurate count in the space allocated on the screen.
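For instance, here's a made-up sketch of how a UI programmer might squeeze a squad count into a fixed-width slot (I don't know FA's actual code, so the names and the cutoff are my assumptions):

```c
#include <stdio.h>

/* One way a UI might fit a squad count into a fixed-width slot:
   show the real number while it fits, fall back to ">999x" when it doesn't.
   Purely illustrative; the game's real code is unknown to me. */
static void show_squads(int count)
{
    char label[8];
    if (count > 999)
        snprintf(label, sizeof(label), ">999x");
    else
        snprintf(label, sizeof(label), "%dx", count);
    printf("squads: %s\n", label);
}

int main(void)
{
    show_squads(42);    /* squads: 42x   */
    show_squads(999);   /* squads: 999x  */
    show_squads(1500);  /* squads: >999x */
    return 0;
}
```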
No doubt many may have found this a bit long-winded? That's okay; hopefully, if you didn't, you enjoyed it.
AJ