Databases will (I think they all do) cache results to RAM and do a good job of keeping the most-used information close at hand. That's why databases are RAM-hungry: usually, the more you throw at them, the faster they run.
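Roughly the idea, as a toy Python sketch. This is a minimal LRU buffer cache, not how any real engine is built; the class name and the read_page_from_disk callback are made up for illustration:

    from collections import OrderedDict

    # Toy buffer cache: keep recently used pages in RAM and evict the
    # least recently used page when the cache is full.
    class BufferCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = OrderedDict()  # page_id -> page contents

        def get(self, page_id, read_page_from_disk):
            if page_id in self.pages:
                self.pages.move_to_end(page_id)  # mark as recently used
                return self.pages[page_id]       # RAM hit: fast path
            page = read_page_from_disk(page_id)  # miss: slow disk read
            self.pages[page_id] = page
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)   # evict least recently used
            return page

The bigger the capacity, the more reads are served from RAM instead of disk, which is why adding memory usually makes the whole thing faster.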
"raydan" said Databases will (I think they all do) cache results to RAM and do a good job of keeping the most used information near. That's why databases are RAM hungry because usually, the more you throw at them, the faster they run.
Databases I work with now nearly exceed the RAM address capabilities of almost every 64-bit processor on the market. There are also dangers of corruption in keeping data in RAM all the time, as opposed to the slower swap-out method. And you really haven't suffered a tech nightmare till you've corrupted a multi-terabyte MSSQL database.
If you want databases to run faster, take them off Intel hardware and throw a mainframe at them.
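A rough back-of-the-envelope on the addressing point. 2^64 bytes is the theoretical limit; the 48-bit figure below is a common x86-64 virtual-address implementation limit, not a property of every CPU:

    # Theoretical vs. practical 64-bit address space, in TiB.
    full_64_bits = 2**64 / 2**40   # ~16.8 million TiB (16 EiB)
    virt_48_bits = 2**48 / 2**40   # 256 TiB on typical x86-64 parts
    print(full_64_bits, virt_48_bits)

So a multi-hundred-terabyte database really can brush against what the hardware will actually map, even though the full 64-bit space is astronomically larger.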
"DrCaleb" said Databases will (I think they all do) cache results to RAM and do a good job of keeping the most used information near. That's why databases are RAM hungry because usually, the more you throw at them, the faster they run.
Databases I work with now, nearly exceed the RAM address capabilities of most every 64 bit processor on the market. There are also dangers of corruption in keeping data in RAM all the time, as opposed to the slower swapout method. And you really haven't suffered a tech nightmare till you've corrupted a multi-terrabyte MSSQL database.
If you want databases to run faster, take them off Intel hardware and throw a mainframe at them. That's my job too... MSSQL databases. I have a few approaching the TB level, but not quite. Most of the databases I work with now are for charitable organizations and foundations.
"DrCaleb" said If you want databases to run faster, take them off Intel hardware and throw a mainframe at them.
I know where there's an IBM 360; it's even got a 32K memory core.
Stuffing everything into volatile memory is asking for trouble.
Niiiice! (in a 1980 sort of way)
I used to run an AS/400 that would take 4 TB of multi-join tables and create a 5,000-page inventory report on a company's 20-year history of all its equipment, in about 3 minutes.
The same report on Intel servers using MSSQL took all weekend.
The inherent assumption in this proposal is that available RAM will be able to hold the required data. That assumption requires databases to stop growing at current rates so that available RAM can catch up.
Which will not happen.
So, yeah, you could run a 1995 database on current 64-bit hardware and so on, but I just don't see required data sets ever being smaller than available RAM.
-J.
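A toy illustration of that growth race, with made-up rates just to show the gap widening rather than closing:

    # Hypothetical: data grows 40%/yr, affordable RAM grows 25%/yr.
    data_tb, ram_tb = 10.0, 1.0
    for year in range(1, 11):
        data_tb *= 1.40
        ram_tb *= 1.25
        print(year, round(data_tb, 1), round(ram_tb, 1))
    # After 10 years the data is ~290 TB and the RAM ~9.3 TB:
    # the ratio got worse, not better.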
People still use disks????? As much as I liked them, a jump drive or CDs are much easier to use and hold far more data. Even my little SD card is better than a disk.
Will you have 23 TB of RAM to keep your databases in? What happens when it exceeds 2^64 bytes in size?
No, disks aren't going anywhere.
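For scale, 2^64 bytes is a long way off from 23 TB; plain arithmetic:

    # How many times a 23 TB database fits into a 64-bit address space.
    db_bytes = 23 * 10**12
    addr_space = 2**64             # ~18.4 million TB
    print(addr_space // 10**12)    # 18446744 (TB, decimal)
    print(addr_space // db_bytes)  # ~800,000 databases that size

The practical ceilings on installed RAM arrive long before the theoretical addressing limit does.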