Problems with a full database [closed]

0

My question is what to do when my database is full. I use PostgreSQL and it already holds a lot of data, since a record is stored every 5 minutes as part of continuous monitoring of the company's local network, and I need to keep all of this data because I need the history. The system has been running for 3 years now, but every query I run is very slow. Is PostgreSQL a good choice for this? How can I fix the slowness of the queries? Have you ever developed or been part of a similar project?

    
asked by Mach1 01.03.2018 at 16:04
source

2 answers

1

I suggest you use the VACUUM command, which can improve the performance of the database.

About the DELETE and UPDATE commands:

  • The DELETE command does not physically remove rows from a table; for performance reasons it only marks them as dead;
  • The UPDATE command does not physically edit rows in place; it creates a new row version and marks the old one as dead.
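To see how much dead-row bloat this has produced after 3 years of writes, you can query the statistics view `pg_stat_user_tables`. A minimal sketch:

```sql
-- Live vs. dead rows per table; a table written every 5 minutes for
-- years tends to accumulate many dead tuples if vacuuming lags behind.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

Tables at the top of this list are the first candidates for vacuuming.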

How can the VACUUM command help you?

  • VACUUM ANALYZE : updates the planner statistics for the tables, which improves query performance;
  • VACUUM FULL : physically removes rows marked as dead and compacts the table. This command locks the table while it runs.
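The two variants above can be run like this (the table name `monitoring_data` is a placeholder for your own table):

```sql
-- Refresh planner statistics and reclaim space for reuse;
-- does not take an exclusive lock, safe to run at any time:
VACUUM ANALYZE monitoring_data;

-- Rewrite the table to return disk space to the operating system;
-- takes an exclusive lock, so run it in a maintenance window:
VACUUM FULL monitoring_data;
```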

More details in the PostgreSQL documentation and in this explanation of how to compact a PostgreSQL database.

    
answered by 01.03.2018 at 16:56
0

If your database gets slower every time you add data, or simply as time passes, there are several possible causes, but it is not that your database is "full". Here are some tips to make it faster:

  • Your tables should always have an integer key.
  • Minimize composite keys as much as possible.
  • If your queries filter on fields other than the keys, it is worth indexing those fields as well.
  • Arrange the AND clauses in the WHERE so that the first AND yields the smallest data set, so the query resolves faster.
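The first and third tips above might look like this for a monitoring workload. The table and column names (`readings`, `host`, `taken_at`) are hypothetical, chosen only to illustrate the idea:

```sql
-- A surrogate integer primary key instead of a composite key:
CREATE TABLE readings (
    id         bigserial PRIMARY KEY,
    host       text        NOT NULL,
    taken_at   timestamptz NOT NULL,
    latency_ms integer
);

-- Queries that filter by host and a time range benefit from an
-- index on those columns, not just on the primary key:
CREATE INDEX readings_host_taken_at_idx ON readings (host, taken_at);
```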

Here is a link that explains how to choose primary keys in databases.

I advise you to review the cost of your queries with Postgres's EXPLAIN statement so you can find out which one(s) take the longest, index them, and solve your problem.
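For example, against the kind of monitoring table described above (names are hypothetical), EXPLAIN ANALYZE both shows the plan and reports actual execution time, which tells you whether an index is being used:

```sql
-- EXPLAIN shows the planner's estimated cost; EXPLAIN ANALYZE also
-- runs the query and reports real timings per plan node:
EXPLAIN ANALYZE
SELECT host, avg(latency_ms)
FROM readings
WHERE taken_at >= now() - interval '1 day'
GROUP BY host;
```

A "Seq Scan" on a large table in the output is usually the sign that a supporting index is missing.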

Finally, if you store binary files in the database, in this thread you can check many points to consider so that your application's performance is not affected by saving them in the database; otherwise, choose the filesystem.

Personally, I think you should start with the indexes and the query costs, since that is the most likely cause.

    
answered by 01.03.2018 at 17:31