It's often not a good idea to store images in the database itself
See the discussions on "Is it better to store images in a BLOB or just the URL?" and "Files - in the database or not?". Be aware that those questions and their answers aren't about PostgreSQL specifically.
There are some PostgreSQL-specific wrinkles to this. PostgreSQL doesn't have any facilities for incremental dumps*, so if you're using `pg_dump` backups you have to dump all that image data for every backup. Storage space and transfer time can be a concern, especially since you should be keeping several weeks' worth of backups, not just a single most recent one.
If the images are large or numerous, you might want to consider storing them in the file system unless you have a strong need for transactional, ACID-compliant access to them. Store the file names in the database, or just establish a file-naming convention based on a useful key. That way you can do easy incremental backups of the image directory, managing it separately from the database proper.
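For example, a key-based naming convention might look like this (the helper name and directory layout are illustrative assumptions, not a fixed scheme):

```ruby
# Derive an on-disk path for an image from its database primary key.
# Sharding into subdirectories keeps any single directory from
# accumulating millions of files. (Hypothetical layout; adapt as needed.)
def image_path(root, image_id)
  shard = format("%03d", image_id % 1000)
  File.join(root, shard, "#{image_id}.jpg")
end

image_path("/var/app/images", 123456)  # => "/var/app/images/456/123456.jpg"
```

Because the path is derived purely from the key, you don't even need to store it in the database.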
If you store the images in the FS you can't easily? access them via the PostgreSQL database connection. OTOH you can serve them over HTTP directly from the file system much more efficiently than you could ever hope to when you have to query them from the DB first. In particular, you can use sendfile() from Rails if your images are on the FS, but not from a database.
If you really must store the images in the DB
... then it's conceptually the same as in .NET, but the exact details depend on which Pg driver you're using, which you didn't specify.
There are two ways to do it:
- Store and retrieve `bytea`, as you asked about; and
- Use the built-in large object support, which is often preferable to using `bytea`.
For small images, where `bytea` is OK:
- Read the image data from the client into a local variable.
- Insert that into the DB by passing the variable as `bytea`. Assuming you're using the `ruby-pg` driver, the `test_binary_values` example from the driver should help you.
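A sketch of that round trip with `ruby-pg` (the table name, column name, and file name are assumptions; the insert itself only runs when a `DATABASE_URL` is set, since it needs a live connection):

```ruby
require "pg" if ENV["DATABASE_URL"]

# Encode raw bytes as a bytea hex literal ("\x..."), the input
# format PostgreSQL has accepted for bytea since 9.0.
def bytea_hex_literal(data)
  "\\x" + data.unpack1("H*")
end

if ENV["DATABASE_URL"]
  conn = PG.connect(ENV["DATABASE_URL"])
  image = File.binread("photo.jpg")  # read in binary mode, not text mode
  # Pass the value in binary format (format: 1) so the driver sends
  # the bytes as-is instead of escaping them as text.
  conn.exec_params(
    "INSERT INTO images(data) VALUES ($1)",
    [{ value: image, format: 1 }]
  )
  conn.close
end
```

The hex-literal helper is only needed if you build text-format queries; with `exec_params` and `format: 1` the driver handles the encoding for you.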
For bigger images (more than a few megabytes) use `lo` instead:

For bigger images, please don't use `bytea`. Its theoretical maximum may be 2GB, but in practice you need 3x (or more) the RAM that the image size would suggest, so you should avoid `bytea` for large images or other large binary data.
PostgreSQL has a dedicated `lo` (large object) type for that. On 9.1, just:

```sql
CREATE EXTENSION lo;
CREATE TABLE some_images(id serial primary key, image_data lo not null);
```

... then use `lo_import` to read the data from a temporary file on disk, so you don't have to fit the whole thing in RAM at once.
The `ruby-pg` driver provides wrapper calls for `lo_create`, `lo_open`, etc., and provides a `lo_import` for local file access too. See this useful example.
Please use large objects rather than `bytea`.
* Incremental backup is possible with streaming replication, PITR / WAL archiving, etc., but again, increasing the DB size can complicate things like WAL management. Anyway, unless you're an expert (or "brave") you should be taking `pg_dump` backups rather than relying on replication and PITR alone. Putting images in your DB will also - by increasing the size of your DB - greatly slow down `pg_basebackup`, which can be important in failover scenarios.
? The adminpack extension offers local file access via a Pg connection for superusers. Your webapp user should never have superuser rights or even ownership of the tables it works with, though. Do your file reads and writes via a separate, secure channel like WebDAV.