What you are trying to do I suppose is to implement a singleton pattern for the connection, in which case it should be:
class ConexionBD
{
    private static $instance;

    private static function conectar()
    {
        $conn = new mysqli(
            'localhost',
            'root',
            'pass',
            'nombreBD');
        if ($conn->connect_error) {
            die('Error de Conexión ('.$conn->connect_errno.') '.$conn->connect_error);
        }
        return $conn;
    }

    public static function getConexion()
    {
        if (!self::$instance) {
            self::$instance = self::conectar();
        }
        return self::$instance;
    }
}
And use the connection in other classes like this:
$conn = ConexionBD::getConexion();
That way you make sure that when you call ConexionBD::getConexion() many times, it always returns the same connection, which lets you open a transaction in one method and commit it in another. On the other hand, if you really want to guarantee that the connection is always the same, you also have to override the magic method __clone to prevent anyone from cloning the connection:
public function __clone() {
    throw new Exception("No se puede clonar esta clase");
}
And, by the way, make sure the class constructor is private so that nobody accidentally does new ConexionBD():
private function __construct() {}
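Putting the three pieces together, the pattern looks like the sketch below. Here a plain stub class (ConexionStub, a name made up for this example) stands in for the mysqli-backed class, so the identity check can run without a database:

```php
<?php
// Minimal sketch of the complete singleton: static instance,
// private constructor, and a __clone that refuses to clone.
// ConexionStub is a stand-in for the real mysqli-backed class.
class ConexionStub
{
    private static $instance;

    // Nobody outside the class can do "new ConexionStub()"
    private function __construct() {}

    // Cloning the "connection" is forbidden
    public function __clone()
    {
        throw new Exception("No se puede clonar esta clase");
    }

    public static function getConexion()
    {
        if (!self::$instance) {
            self::$instance = new self();
        }
        return self::$instance;
    }
}

$a = ConexionStub::getConexion();
$b = ConexionStub::getConexion();
var_dump($a === $b); // the same instance on every call
```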
Now, using the singleton pattern in PHP is considered an antipattern. For example, if you run unit tests, every test would use the same connection and be influenced by the operations the previous test performed. That contradicts the very concept of a unit test.
There are no connection pools in PHP (in the sense of a permanently running service that keeps several connections open and persists across requests): each incoming client creates its own connection, and it is closed automatically when the request's life cycle ends. In that sense a sort of pooling does happen, as gbianchi says, but it is not a persistent pool like in a Python or Node.js app; PHP recreates the app on every request.
There is no way to make two clients that hit your app use the same connection, and that is actually a good thing, because one client could open a transaction and another client could invoke a method that executes a COMMIT. It would be bad enough for a third party to COMMIT your transaction while you are inserting bulk records. For the same reason, it is not really so terrible to instantiate a new connection in each controller. Again, as gbianchi said, this would even let you manage two or more connections in the same controller and (if you want to get fancy) use one connection to read from one table and another to insert into a different table:
$resultado = $conn1->query('SELECT nombre, apellido FROM clientes WHERE region = 1');
$stmt = $conn2->prepare("INSERT INTO clientes_region1 (nombre, apellido) VALUES (?, ?)");
$stmt->bind_param('ss', $nombre, $apellido);
while ($row = $resultado->fetch_array()) {
    $nombre = $row['nombre'];
    $apellido = $row['apellido'];
    $stmt->execute();
}
By the single responsibility principle, each controller should open its own connection (and optionally close it) instead of relying on a global connection.
If you need two methods of the same controller to use the same connection, instantiate the connection as a property of the class in the controller's __construct:
class Ejemplo {
    private $conexion;

    public function __construct() {
        $this->conexion = new mysqli(...);
    }
}
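A sketch of how two methods then share that one property. The class and method names (EjemploControlador, abrirTransaccion, confirmar) are invented for this example, and a tiny stub that just records calls stands in for mysqli so the sketch runs without a database; with a real connection you would call begin_transaction() and commit() on $this->conexion:

```php
<?php
// Stub "connection" that records which methods were called on it;
// the method names mirror mysqli's transaction API.
class ConexionFalsa
{
    public $llamadas = [];
    public function begin_transaction() { $this->llamadas[] = 'begin'; }
    public function commit()            { $this->llamadas[] = 'commit'; }
}

class EjemploControlador
{
    private $conexion;

    public function __construct()
    {
        // In real code: $this->conexion = new mysqli(...);
        $this->conexion = new ConexionFalsa();
    }

    public function abrirTransaccion()
    {
        $this->conexion->begin_transaction();
    }

    public function confirmar()
    {
        // Same $this->conexion, so the COMMIT matches the open transaction
        $this->conexion->commit();
    }

    public function registro()
    {
        return $this->conexion->llamadas;
    }
}

$controlador = new EjemploControlador();
$controlador->abrirTransaccion();
$controlador->confirmar();
print_r($controlador->registro()); // begin, then commit, on the one connection
```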
And if you need one class to pass its connection instance to another class, do it with dependency injection: instantiate the second class, passing to its constructor the connection instance you already have.
class Padre {
    private $conexion;
    private $hijo;

    public function __construct() {
        $this->conexion = new mysqli(...);
        $this->hijo = new Hijo($this->conexion);
    }
}
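The receiving side of that injection could look like the sketch below: Hijo does not create its own connection, it just stores whatever is handed to it (the getConexion accessor is added here only so the identity check is visible; a stdClass stands in for mysqli so the sketch runs without a database):

```php
<?php
// Sketch of the injected class: it stores the connection it receives
// instead of creating one of its own.
class Hijo
{
    private $conexion;

    public function __construct($conexion)
    {
        $this->conexion = $conexion;
    }

    public function getConexion()
    {
        return $this->conexion;
    }
}

$conexion = new stdClass(); // in real code: new mysqli(...)
$hijo = new Hijo($conexion);
var_dump($hijo->getConexion() === $conexion); // the very same instance
```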
That way, if you want to unit test the class Hijo, you create the connection in the test, instantiate the class, run the test, and close the connection.
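A bare-bones sketch of such a test, without a framework (assert() stands in for a real PHPUnit assertion; the stub class ConexionDePrueba and the method contarClientes are invented for this example and replace a real mysqli connection and query):

```php
<?php
// The class under test receives its connection via the constructor,
// so the test can inject a stub instead of a real mysqli connection.
class Hijo
{
    private $conexion;

    public function __construct($conexion)
    {
        $this->conexion = $conexion;
    }

    public function contarClientes()
    {
        // With a real mysqli this would run a query; the stub
        // returns canned rows so the test is deterministic.
        return count($this->conexion->query('SELECT * FROM clientes'));
    }
}

// Stub connection that answers every query with two fixed rows.
class ConexionDePrueba
{
    public function query($sql)
    {
        return [['id' => 1], ['id' => 2]];
    }
}

// 1. create the connection in the test, 2. instantiate the class,
// 3. run the assertion, 4. the connection is discarded with the test.
$conexion = new ConexionDePrueba();
$hijo = new Hijo($conexion);
assert($hijo->contarClientes() === 2);
echo "test OK\n";
```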
EDIT: Is it optional to close the connection?
Short answer: yes, it is optional. The connection will be closed when the request's life cycle ends. BUUUUT... there are edge cases.
Suppose your application has very long requests. For example: read a table of 500,000 records, put them in an array, process them, store them in a file, upload that file to a remote server, etc. (assuming your memory settings and execution time limit allow it), and the whole process takes 10 minutes.
In that case you only need the connection open while reading those 500,000 records. For the rest of the 10 minutes it stays open only because the request has not ended. The connection is "taken", and the database only supports a limited number of simultaneous connections (this is configurable, but it is still a finite number). For that use case it would be appropriate to close the connection as soon as you have fetched the data from the table, to free connections in the database and not block other users.
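A sketch of that idea, assuming a live MySQL server with the same credentials as in the earlier example and a hypothetical table tabla_grande (so it is not runnable as-is without them):

```php
<?php
// Fetch everything first, close immediately, then do the slow work
// with the connection already released back to the database.
$conn = new mysqli('localhost', 'root', 'pass', 'nombreBD');

$resultado = $conn->query('SELECT * FROM tabla_grande');
$filas = $resultado->fetch_all(MYSQLI_ASSOC); // copy the rows into PHP memory
$resultado->free();
$conn->close(); // the connection slot is free from here on

// ...10 minutes of processing, file writing, uploading, etc.,
// without holding one of the database's limited connections.
```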