This chart creates a Google Cloud SQL Proxy deployment and service on a Kubernetes cluster using the Helm package manager. You need to enable the Cloud SQL Admin API and create a service account for the proxy as per these instructions. The Cloud SQL Proxy allows a user with the appropriate permissions to connect to a Second Generation Cloud SQL database without having to deal with IP whitelisting or SSL certificates manually. It works by opening Unix/TCP sockets on the local machine and proxying connections to the associated Cloud SQL instances when the sockets are used.
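As a sketch, the prerequisites and the chart installation might look like the following. The project ID, service-account name, release name, and chart values are placeholders; the chart's actual value keys may differ, so check its values.yaml:

```shell
# Enable the Cloud SQL Admin API for your project (project ID is a placeholder).
gcloud services enable sqladmin.googleapis.com --project my-project

# Create a service account for the proxy and grant it the Cloud SQL Client role.
gcloud iam service-accounts create sqlproxy-sa --project my-project
gcloud projects add-iam-policy-binding my-project \
    --member serviceAccount:sqlproxy-sa@my-project.iam.gserviceaccount.com \
    --role roles/cloudsql.client

# Download a JSON key for the service account; the chart mounts it as a secret.
gcloud iam service-accounts keys create key.json \
    --iam-account sqlproxy-sa@my-project.iam.gserviceaccount.com

# Install the chart (release name, chart reference, and value key are assumptions).
helm install my-sqlproxy stable/gcloud-sqlproxy \
    --set serviceAccountKey="$(base64 key.json)"
```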
The Cloud SQL Proxy provides secure access to your Cloud SQL Second Generation instances without having to whitelist IP addresses or configure SSL.
Accessing your Cloud SQL instance through the Cloud SQL Proxy gives you secure, encrypted connections without manually managing SSL certificates, and authorization through a service account rather than IP whitelisting.
Note that you do not need to use the proxy or configure SSL to connect to Cloud SQL from the App Engine standard or flexible environment.
The Cloud SQL Proxy works by running a local client, called the proxy, in your local environment. Your application communicates with the proxy using the standard database protocol for your database. The proxy uses a secure tunnel to communicate with its companion process running on the server.
To use the proxy, the Cloud SQL Admin API must be enabled for your project, and the proxy must be started with the credentials of a service account (or user) that is authorized to connect to the instance.
The instance's IP address does not need to be accessible to any external address (whitelisted).
When you start the proxy, you tell it which Cloud SQL instances to connect to, where it should listen for data coming from your application, and where it will find the credentials it uses to authenticate.
The proxy startup options you provide determine whether it will listen on a TCP port or on a Unix socket. If it is listening on a Unix socket, it creates the socket at the location you choose; usually, the /cloudsql/ directory. For TCP, the proxy listens on localhost by default.
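For example, the (v1) proxy binary can be started in either mode; the instance connection name and key file below are placeholders:

```shell
# Listen on a local TCP port (127.0.0.1:3306) for one instance.
./cloud_sql_proxy -instances=my-project:us-central1:my-db=tcp:3306 \
    -credential_file=key.json &> cloud_sql_proxy.log &

# Or listen on Unix sockets created under /cloudsql/.
sudo mkdir -p /cloudsql && sudo chmod 777 /cloudsql
./cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:my-db \
    -credential_file=key.json &> cloud_sql_proxy.log &
```

Redirecting the output to a log file, as above, keeps it available for diagnosing connection problems.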
You can install the proxy anywhere in your local environment. The location of the proxy binaries does not impact where it listens for data from your application.
All of the example proxy invocations start the proxy in the background, so a prompt is returned. It is preferable to reserve that terminal for the proxy, to avoid mixing its output with the output from other programs. The proxy's output can also help you diagnose connection problems, so it can be helpful to capture it in a log file.
You do not have to use /cloudsql as the directory for the proxy sockets. (That directory name was chosen to minimize differences with App Engine connection strings.) If you change the directory name, however, keep the overall length to a minimum; it is incorporated in a longer string that has a length limit imposed by the operating system.
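On Linux, for instance, a Unix socket path is limited to roughly 107 bytes, and the proxy's socket path is the directory plus the full instance connection name. A quick sanity check, using a hypothetical connection name:

```shell
# The socket path is <dir>/<project>:<region>:<instance>.
SOCKET_DIR=/cloudsql
INSTANCE=my-project:us-central1:my-instance   # hypothetical connection name
SOCKET_PATH="$SOCKET_DIR/$INSTANCE"
echo "${#SOCKET_PATH} bytes"   # 44 bytes; must stay under the OS limit (~107 on Linux)
```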
Using the proxy to connect to multiple instances
You can use one local proxy client to connect to multiple Cloud SQL instances. The way you do this depends on whether you are using Unix sockets or TCP.
To connect the proxy to multiple instances, you provide the instance connection names with the -instances parameter, in a comma-separated list (no spaces). The proxy connects to each instance when it starts.
You connect to each instance using its socket, in the specified directory.
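For example, with Unix sockets and placeholder instance names:

```shell
# One proxy process serving two instances; a socket is created per instance.
./cloud_sql_proxy -dir=/cloudsql \
    -instances=my-project:us-central1:db-a,my-project:us-central1:db-b &

# Connect to an instance through its socket under /cloudsql/.
mysql -u dbuser -p -S /cloudsql/my-project:us-central1:db-a
```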
When you connect using TCP, you specify the port on your machine to use to connect to the instance, and every instance must have its own port. The mysql tool uses 3306 by default, but you can specify another port for it to use.
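A TCP sketch with two instances on separate local ports, again with placeholder names:

```shell
# Each instance gets its own local port.
./cloud_sql_proxy \
    -instances=my-project:us-central1:db-a=tcp:3306,my-project:us-central1:db-b=tcp:3307 &

# Point mysql at the second instance's local port instead of the default 3306.
mysql -h 127.0.0.1 -P 3307 -u dbuser -p
```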
Google occasionally releases new versions of the proxy. You can see what the current version is by checking the Cloud SQL Proxy GitHub releases page. Future proxy releases will also be noted in the Google Groups Cloud SQL announce forum.
The Cloud SQL Proxy issues requests to the Cloud SQL API. These requests count against the API quota for your project.
The highest API usage occurs when you start the proxy; this is especially true if you use automatic instance discovery or the -projects parameter. While the proxy is running, it issues 2 API calls per hour per connected instance.
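At steady state the quota impact is easy to estimate; for example, for a hypothetical proxy with 5 connected instances:

```shell
INSTANCES=5
CALLS_PER_HOUR=$((2 * INSTANCES))        # 2 API calls per hour per connected instance
CALLS_PER_DAY=$((CALLS_PER_HOUR * 24))
echo "$CALLS_PER_DAY calls/day"          # 240 calls/day against the project's API quota
```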
When you are using the Cloud SQL Proxy in a production environment, there are some steps you can take to ensure that the proxy provides the required availability for your application.
If the proxy process is terminated, all existing connections through it are dropped, and your application cannot create any more connections to the Cloud SQL instance through the proxy. To prevent this scenario, be sure to run the proxy as a persistent service, so that if the proxy exits for any reason, it is automatically restarted. This can be accomplished by using a service such as systemd, upstart, or supervisor. For the Windows operating system, run the proxy as a Windows Service. In general, the proxy should have the same uptime requirements as your application process.
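A minimal sketch of such a persistent service using systemd; the binary path, key file location, and instance name are assumptions to adapt to your environment:

```shell
# Write a systemd unit that restarts the proxy whenever it exits.
sudo tee /etc/systemd/system/cloud-sql-proxy.service <<'EOF' >/dev/null
[Unit]
Description=Cloud SQL Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/cloud_sql_proxy \
    -instances=my-project:us-central1:my-db=tcp:3306 \
    -credential_file=/etc/cloud-sql-proxy/key.json
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now cloud-sql-proxy
```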
There is no need to create a proxy process for every application process; many application processes can share a single proxy process. In general, you should run one proxy client process per workstation or virtual machine.
If you are using auto-scaling for virtual machines, ensure that the proxy is included in your virtual machine configuration, so that whenever a new virtual machine is started, it has its own proxy process.
It is up to you to manage how many connections your application requires, whether by limiting or pooling the connections. The proxy does not place any limitations on new connection rates or persistent connection count.
If you need to reduce the size of the proxy log, you can do so by setting -verbose=false when you start the proxy. Keep in mind, however, that doing so will reduce the effectiveness of the proxy output in diagnosing connection issues.
If you are running the proxy against an instance configured for High Availability, and a failover occurs, connections through the proxy are affected the same way as connections over IP: all existing connections are lost, and the application must establish new connections. However, no manual intervention is required; the application can continue using the same connection strings it was using before.