Database DevOps and Code management with PostgreSQL

For PGSQL Phriday #004, Hettie Dombrovskaya challenged us to think about working with PostgreSQL as code, by ourselves or within teams, and how we keep track of the scripts and changes that are important to our day-to-day work. There are so many facets of this topic we could discuss, and Hettie gave a few really great prompts, but I’ll focus on two main areas: Database DevOps (generally) and then more specifically PostgreSQL scripts that I find helpful (and how to track them).

Let’s hop to it!

What if Joe joins the circus?

At the risk of sounding like a broken record, the longest tenure of my career, 15 years, was at a software company that had used SQL Server since the 1990s. (Their usage of SQL Server dates back that far, not the start of my employment! How old do you think I am?!?! 😱👴)

The only real schema management most companies had at the time came from a host of Entity Relationship tools like ERwin (what we used), ER/Studio, and others. Nobody was talking about Agile or DevOps (let alone Database DevOps), and application releases took months or more of effort.

Around 2010, Agile was taking hold and many changes were incorporated into “regular” application development processes to improve efficiency and the overall speed of product delivery. Unfortunately, database development didn’t quite follow suit in most organizations. At best, database changes began to be incorporated as ORM migrations, often at the expense of utilizing the database features to their fullest extent. It also meant that developers with little real database experience were modifying production databases through feature migrations without much oversight.

Oddly enough, we had the opposite problem. Our part-time DBA was still using the ER-based changeset process he had developed in the 1990s with bash scripts. We had to rely on him to incorporate all the changes each release, modify upgrade (manual migration) scripts to catch edge cases, ensure static data was up-to-date, and more. Aside from asking quite a lot of one person, and despite numerous attempts to get others up to speed on the process, it became clear that we were very exposed to a break in this link of the development chain. If Joe got hit by a bus (ahem, joined the circus), database change management was going to be a major issue.

It was here, struggling to lead the development team in a growing SaaS business, managing hundreds of client databases, that I knew there must be a better way.

Database DevOps

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.”

Donovan Brown, Microsoft

At the core, the biggest problem we needed to solve was making our database change process visible to the whole team. We had to modify our processes and the expectations we had of individual people within the team to deliver better, more consistent value at the data layer.

The tools we chose for our circumstances don’t apply completely to PostgreSQL because they were SQL Server specific. But the process and benefits of building a “database as code” mindset within the team accelerated our velocity and quality. And Joe didn’t have the weight of the world on his shoulders. 😀

A few principles that we found important, and that carry through in how I approach and teach database management to this day, include:

  • Knowing the true database state is invaluable: One of the biggest issues I have with a migration-only process (whether SQL or ORM) is that it’s hard to validate the intended state of the database schema – the “source of truth”. If your tooling doesn’t provide a method for easily maintaining a record of the expected state at various milestones, you’ll never really know which schema to trust (or notice when a production database has “drifted” from that source of truth).
  • All SQL is versioned and committed: Migrations, repeatable scripts, static data, and helper scripts all need to be versioned and committed (see the sketch after this list). This provides the same PR-led observability on changes and lets every developer know where the “newest, best version” of a helper script can be found. When scripts are strewn across developer computers or browser bookmarks, that’s a recipe for trouble.
  • CI/CD is for databases, too: Database changes should be automated and incorporated just like regular application code. Even committed code will eventually lose out to manual processes if it isn’t put through a similar CI/CD pipeline. If the toolset you’re using doesn’t help with this, consider moving toolsets. The pain now will reap greater ROI than you can probably see yet.
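
To make the “versioned and committed” idea concrete, here’s a minimal sketch of what two such scripts might look like using Flyway-style naming conventions (the file names, table, and columns are hypothetical):

```sql
-- V003__add_customer_email.sql
-- A versioned migration: applied exactly once, in version order.
ALTER TABLE customer ADD COLUMN email text;

-- R__customer_search_view.sql
-- A repeatable script: re-applied whenever its contents change.
CREATE OR REPLACE VIEW customer_search AS
SELECT id, full_name, email
FROM customer;
```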

I believe in the process of database development so much that I’ve recently joined Redgate, a leader in database DevOps and tooling for over two decades. Our flagship tool, Flyway Enterprise, combines state-based tracking with intelligent migration generation. Although I had been a Flyway Community user before (a migration-only toolset), the added benefit of tracking state provides control over migrations and assurance that there is always a source of truth to verify against.

If your current toolset doesn’t allow you to work towards a state of committing changes and rolling them out in an automated, test-driven, PR-based way, find a tool that does and start taking small steps there. I promise that once you see a script work its way through dev, staging, and production environments, you’ll be amazed that you worked so long without that process.

And, if you don’t know where to get started, feel free to reach out through Twitter, LinkedIn, or email.

Helpful PostgreSQL Scripts

Great! Now that you have a process for managing your database changes and the scripts that matter – what are those scripts, and where can you find them? The answer is trickier than I’d like.

PostgreSQL, the global community, and core “leadership” have put so much effort into documentation and examples over the last few decades. Sometimes, however, it can be hard to remember that PostgreSQL is maintained by volunteers, even if some of them are employed by various companies as contributors to the project. Maintaining pages and repositories of example scripts is challenging for the most committed teams, let alone a globally distributed set of “hackers” focused on turning out some pretty 💯 releases of PostgreSQL every year (for many years running)!!

That said, there have been a few attempts over the years to provide a place for this extra information, a few of which aren’t referred to often enough (IMO). One site that holds lots of information is the PostgreSQL Wiki. Below are a few pages that I’ve referred to over the years, some of which probably need to be updated and moved into some more conspicuous, “official” areas like documentation.

  • Don’t Do This: Still (mostly) relevant information about PostgreSQL features, data types, or setup in a “do this, not that” way.
  • Index Maintenance Scripts: Common starting place for finding duplicate or unused indexes.
  • Developer FAQ: Exactly as it says on the tin, many questions that are repeated often on mailing lists, Slack, and elsewhere. Continues to be maintained.
  • Performance Snippets: Database bloat, loose index scan, finding locks, and more!

On a real project, whenever I use one of these scripts as a starting point, we commit it to a “helpers” repository or folder in the current project, with comments that reference the source, provide a URL, and, where relevant, link to any blog posts or Stack Overflow answers. Without this habit or expectation, many scripts die the death of an accidentally closed IDE tab. 😬
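
To illustrate the habit, here’s roughly what one of those committed helper scripts might look like: a hedged sketch of an unused-index check built on the pg_stat_user_indexes statistics view (the file name and layout are mine, not the wiki’s):

```sql
-- helpers/unused_indexes.sql
-- Adapted from the PostgreSQL Wiki "Index Maintenance" page:
-- https://wiki.postgresql.org/wiki/Index_Maintenance
-- Lists indexes that have never been scanned, largest first.
-- Review carefully before dropping: constraint-backing indexes
-- can legitimately show zero scans.
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```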

Work to be an example

Most of the time I’m the least qualified developer in the PostgreSQL space to provide elegant examples. My Postgres knowledge is hard won (and hard work). But I love the platform, the community, and the constant improvements. Learning how to use data, harness the power of SQL and Postgres, and share it all with others is something I’m trying to get better at.

Whenever I give a presentation at a conference or meetup, I try to make the slides and any SQL available through my GitHub repo. More recently, I’ve started recording a video for each day of the Advent of Code as I’ve completed them, talking through my solutions and what I learned along the way.

I’d encourage you to do the same. Through the 2022 State of PostgreSQL Survey, we learned that many people in the PostgreSQL community prefer reading blog posts and watching videos. The more you contribute, the more likely your work will end up in the comment section of a script that helps a team use Postgres more effectively. 😉

Long-live Postgres!

Advent of Code 2022: Days 6-10 With PostgreSQL

I’m slowly working through the Advent of Code 2022 puzzles using PostgreSQL and SQL as much as possible. For an overview of how I’m approaching these, please see the first post in this series: Advent of Code 2022: Days 1-5 with PostgreSQL

Other resources:

AoC 2022: Day 6-10 with PostgreSQL

Day 6: Tuning Trouble

A fun puzzle: we were given a long string of characters and had to find transmission markers by identifying runs of non-repeating characters.

For Star 1, we had to find the “start-of-packet” marker, the first set of four non-repeating characters. After some thought, I chose to split the long string into overlapping four-character arrays and then use DISTINCT to identify the first one containing four distinct characters.

Star 2 was similar, only we had to identify the “start-of-message” marker which was 14 consecutive, non-repeating characters. I simply changed a few values to create arrays of 14 characters to solve this puzzle.
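
Here’s a minimal sketch of that sliding-window idea, assuming the puzzle input has been loaded into a hypothetical single-row table aoc06(data text):

```sql
-- Find the first position where the preceding four characters are all distinct
SELECT min(n) AS start_of_packet
FROM aoc06,
     generate_series(4, length(data)) AS n
WHERE (SELECT count(DISTINCT c)
       FROM unnest(string_to_array(substr(data, n - 3, 4), NULL)) AS c) = 4;
-- For Star 2, widen the window: substr(data, n - 13, 14), a series starting
-- at 14, and a distinct count of 14.
```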

Advent of Code Day 6 with PostgreSQL

Day 7: No Space Left On Device

Directory structures and recursive CTEs. Whenever someone teaches about recursive CTEs they either use a manager/employee example or a directory structure, typical parent/child relationships. So while I felt like this should have been easy, for some reason I struggled to get the right relationship. Once I did (with some help), things fell into place.

Star 1 entailed creating the correct parent/child relationship so that we could sum the total file space of each directory in the tree. This meant that a parent folder’s size was the sum of the files it contained plus the sum of all child folders. Great usage of regexp_match and CARDINALITY to help solve this puzzle.

I kept getting close, but honestly couldn’t have solved it in a timely manner without seeing what someone else did to get the correct parent/child sums. Great learning experience.

Star 2 took the input from the first star to then figure out the smallest folder that could be deleted to free up enough space to do an update. I’m sure there were more elegant solutions, but I simply did some additional math and sub-selects to get the final answer.
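
For anyone else wrestling with that relationship, here’s the general shape of it as a hedged sketch; the dirs and files tables are hypothetical stand-ins for the parsed puzzle input:

```sql
-- Hypothetical schema: dirs(id int, parent_id int), files(dir_id int, size bigint)
WITH RECURSIVE dir_tree AS (
    -- every directory is its own ancestor...
    SELECT id AS dir_id, id AS ancestor_id
    FROM dirs
    UNION ALL
    -- ...and also belongs to every ancestor up the chain
    SELECT dt.dir_id, d.parent_id
    FROM dir_tree dt
    JOIN dirs d ON d.id = dt.ancestor_id
    WHERE d.parent_id IS NOT NULL
)
-- A directory's total size = its own files + all descendant files
SELECT dt.ancestor_id AS dir_id,
       sum(f.size)    AS total_size
FROM dir_tree dt
JOIN files f ON f.dir_id = dt.dir_id
GROUP BY dt.ancestor_id;
```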

Advent of Code Day 7 with PostgreSQL

Day 8: Treetop Tree House

Early in my career I was a teacher for 12-18 year olds (middle and high school for U.S. folks). Part of my teaching load was technology classes, both programming and basics like word processing and spreadsheets. I learned early on that most kids rarely read past the first line of a test question or the first choice in a spellcheck list (I got many papers addressed to Mr. Boo).

Day 8 was my own little reminder to read the full instructions. The puzzle instructions and examples were clear… I just didn’t read them carefully. This meant that I approached Star 1 wrong from the beginning and wasted some time. Once I realized my mistake, the second try went much better.

For Star 1 we had to look at a grid of numbers representing the height of trees to find the total number of visible trees. A tree was visible if it could be seen from any direction (row/column) all the way to the edge. The final solution used string_to_table (new in PostgreSQL 14) and some basic WHERE predicates. I also saw a better solution from Vik Fearing using WINDOW functions which was so much simpler. 🤦‍♂️
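
For reference, here’s the kind of grid-building step that string_to_table enables, assuming the raw puzzle lines live in a hypothetical table aoc08(row_num int, line text):

```sql
-- Explode the grid into one row per tree: (row, column, height)
SELECT row_num     AS tree_row,
       ord         AS tree_col,
       height::int AS height
FROM aoc08,
     string_to_table(line, NULL) WITH ORDINALITY AS t(height, ord);
```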

Star 2 was a bit closer to what I was originally trying for the first star, but this time we had to find a “scenic score” for each tree using some basic math and a final ORDER BY.

Quick note: I realized after recording the video that one of my “clean up” edits on the SQL actually produced the wrong answer in the video because it relied on the identity column of my data starting at 1. I’ve updated the GitHub repo code to ensure it works regardless of the ID, but didn’t feel it was worth re-recording the video. Live and learn…

Advent of Code Day 8 with PostgreSQL

Day 9: Rope Bridge

I’m not going to lie on this one. Day 9 really got me. The basics of how to approach this puzzle seemed clear, but I once again missed an important detail. We had to track the movements of the head of a rope and then adjust the tail of the rope to follow along based on the rules presented. “Simple enough,” I thought, “perfect use of a recursive CTE”. That did work in the end, but not without some trials. The solution for each puzzle was about tracking how many unique coordinates the tail visited during all of the movement.

Star 1 was only about two “knots”, a head and a tail. We were provided the movements of the head and then had to move the tail along behind it. The big thing I missed the first time around is that we had to take one step at a time and adjust the tail accordingly. Instead, my first attempt applied the total movement all at once, which resulted in significantly under-counting the movement of the tail. Lesson learned.

Star 2 was just the first star… on steroids!! Now we had to track the movement of 10 knots while still only knowing where the first knot (the head) moved. I kept getting close, but because the movement of each knot was predicated on the movement of the knot before it, I kept missing one movement value. The simple formula that Vik used in his solution was the final key for me.

My final solution here used a multi-dimensional array with a recursive CTE to track the movement of each knot as things went. By cutting out additional work, I brought the runtime of my initial solution down from ~25 minutes to ~1.5 minutes.

Advent of Code 2022 Day 9 with PostgreSQL

Day 10: Cathode-Ray Tube

Honestly, the second star has been my favorite answer so far. Getting there seemed simple enough, but off-by-one issues just kept annoying me! Once again I entered the puzzle with what seemed like a really easy solution, only for it to get complicated 10 minutes later. After solving the puzzle I saw others solved it much more easily with something like string_to_table and a WINDOW function. One of these days I’ll learn!

Star 1 tracked the value of an integer register as each CPU cycle ticked by, given a set of instructions. Again, I got the answer, but not without overcomplicating it.

Star 2 was honestly just really fun. We had to use the values in the register from Star 1, using them to identify pixels on a small CRT that needed to be turned on. Once that was completed, we had to “draw” the screen to see a set of capital letters. Something about taking a table of data and calling string_agg to reveal the letters just felt fun.
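
Here’s a hedged sketch in the spirit of that simpler window-function style; the instructions table and its columns are hypothetical stand-ins for the parsed input:

```sql
-- Hypothetical input: instructions(line_no int, op text, val int)
WITH ticks AS (
    -- noop takes 1 cycle, addx takes 2
    SELECT line_no, op, val,
           generate_series(1, CASE op WHEN 'addx' THEN 2 ELSE 1 END) AS tick
    FROM instructions
), cycles AS (
    SELECT row_number() OVER (ORDER BY line_no, tick) AS cycle,
           -- the addx delta lands at the END of its second cycle
           CASE WHEN op = 'addx' AND tick = 2 THEN val ELSE 0 END AS delta
    FROM ticks
)
-- X "during" a cycle = 1 plus every delta completed on earlier cycles
SELECT cycle,
       1 + coalesce(sum(delta) OVER (ORDER BY cycle
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS x_during
FROM cycles;
```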

Advent of Code 2022 Day 10 with PostgreSQL

Key Takeaways

I have no idea how long it will take me to try and work through all 25 puzzles. In ten “days”, I’m amazed at how much I’ve learned, both about how I approach this kind of problem and how there are always multiple ways to the same ends. Learning the strengths and weaknesses of your tool, PostgreSQL in this case, is essential to improving your knowledge for the next task.

And again, I’m thankful to others like Vik and Feike that are sharing their work so that I can learn more, too!

Advent of Code 2022: Days 1-5 With PostgreSQL

Advent of Code is a yearly coding challenge created by Eric Wastl (@ericwastl) that runs from December 1-25. I’ve seen it referenced over the last 6-7 years in various technical circles, but never registered and tried to follow along.

This year, I started to see Vik Fearing and Feike Steenbergen tweet about it, specifically solving the puzzles using PostgreSQL. The specific challenge they presented for themselves was to only use SQL and base functions provided by PostgreSQL, hoping to avoid functions or stored procedures that require a language like pl/pgsql.

The idea intrigued me, and I decided to work alongside my fellow PostgreSQL community members on AoC, learning all I could in the process. (Spoiler alert… I’ve learned a lot already!)

As I go, I’ll upload my finished code to my repo and (hopefully) record a video for each day explaining my approach. I’ve finished through day 8 (as of December 14, 2022), but haven’t finished all of the videos yet. Again, my goal is to enjoy the challenge and learn, not try to be the first one done or get on a leaderboard somewhere. This also means it will probably take into January 2023 to finish the puzzles and record videos.

Enough explanation! Let’s talk about the first five puzzles, including links to the videos.

AoC 2022: Day 1-5 with PostgreSQL

Day 1: Calorie Counting

This was an interesting first puzzle to start with in SQL. I originally solved it using a complicated back and forth between rows and arrays. The puzzle provided input that represented the calorie values for various food items that each elf had with them. The group of items for each elf differed, and the only way to determine the separation of groups was an empty row in the text file.

The simpler solution used a FILTER clause on an aggregate over the groups. (Thanks to Feike for that hint!)
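
Here’s one way that trick can look, assuming the raw lines are loaded into a hypothetical table aoc01(line_no int, val text), with empty strings marking the boundary between elves:

```sql
WITH grouped AS (
    -- a running count of blank lines assigns each row to an elf
    SELECT val,
           count(*) FILTER (WHERE val = '') OVER (ORDER BY line_no) AS elf
    FROM aoc01
)
SELECT elf, sum(val::int) AS total_calories
FROM grouped
WHERE val <> ''
GROUP BY elf
ORDER BY total_calories DESC
LIMIT 1;
```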

Advent of Code Day 1 with PostgreSQL

Day 2: Rock, Paper, Scissors

So far this has felt like the “easiest” puzzle for me. Although I probably could have completed it with less typing, the concept was pretty straightforward and accomplished by joining two related tables and adding the appropriate point amounts.

Advent of Code Day 2 with PostgreSQL

Day 3: Rucksack Reorganization

There’s something about simple tasks like splitting strings and creating tables to solve a puzzle. Not the least of which is remembering that most of these tasks in PostgreSQL are 1-based, something you notice when splitting a string in half doesn’t quite work out the first time. 😀 In the end, it was simplest to create a couple of tables with a CTE and join them to find the “shared items” and solve the puzzle.
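
That 1-based indexing bites in exactly this spot; here’s a tiny, runnable reminder:

```sql
-- substr(string, start, length) starts counting at 1, not 0
SELECT substr(s, 1, length(s) / 2)  AS first_half,
       substr(s, length(s) / 2 + 1) AS second_half
FROM (VALUES ('abcdefgh')) AS t(s);
-- first_half: 'abcd', second_half: 'efgh'
```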

Advent of Code Day 3 with PostgreSQL

Day 4: Camp Cleanup

This puzzle was all about overlapping ranges of numbers. I had recently been reviewing the comparison operators for both arrays and range types which made this puzzle pretty easy to solve once the data was prepared. For the fun of it, I did it with both arrays and range types.
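
As a hedged illustration of those operators, assuming the section pairs have been parsed into a hypothetical table assignments(a int4range, b int4range):

```sql
SELECT count(*) FILTER (WHERE a @> b OR b @> a) AS fully_contained, -- Star 1
       count(*) FILTER (WHERE a && b)           AS overlapping      -- Star 2
FROM assignments;
```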

Advent of Code Day 4 with PostgreSQL

Day 5: Supply Stacks

This puzzle entailed iterating through instructions to move items between stacks. I knew this would probably entail using a recursive CTE without the ability to use a secondary language with loops and conditionals. And, to be honest, recursive CTEs always make my brain work harder than it should, but after some work I was able to make it happen.

In the end, I chose to use only an array of strings to move items around the stacks, which honestly made it a bit harder to get to my solution. After the fact, I saw that others in the community quickly created a JSONB object from the data so that they could use the jsonb_set function, which was a bit cleaner.

Because I had to put some mental effort into creating the recursive CTE correctly, I spent ~10 minutes in my video recap for this day talking about how recursive CTEs work and how they helped me solve the day 5 puzzle.

Advent of Code Day 5 with PostgreSQL

Key Takeaways

I’m happy that I saw others in the PostgreSQL community mention Advent of Code. The opportunity to think about data in a new way, and to process and query it without variables and traditional procedural conveniences, has been fun (if at times difficult) and displays how powerful PostgreSQL and SQL are.

I can’t wait to see what the next 20 challenges have in store… and what I’ll inevitably learn through the process.

PostgreSQL Community: Passionate, Knowledgeable, and Thoughtful

For PGSQL Phriday #003 (the last one of 2022), Pat Wright asked us to consider the PostgreSQL community, what it means to us, and how we would get started as a new user. I love this topic for a couple of reasons, not the least of which is that my interest in, and enjoyment of, databases has been impacted most by the willingness of others in various data communities to contribute their time and energy.

Community is Eclectic

Let me start with a disclaimer.

I understand that there is no one-size-fits-all definition of community. Especially in a global community like PostgreSQL, we’ll each have aspects of conversation, sharing, and helping that look different. Some folks appreciate plain, direct conversation. Others prefer more context and story. Many users like in-person events with social gatherings and packed rooms. Yet other folks want the knowledge without expectations of joining the conversation directly.

While those details are important to consider, I think the real definition of a thriving community is deeper than that.

I believe that the heart of a thriving PostgreSQL community, regardless of the nuanced cultural details, consists of people that are knowledgeable and passionate about the technology, who thoughtfully help others grow in their knowledge and passion, too. When that’s the primary focus, the community grows in a way that builds investment and shared ownership. Together, everyone gets to share in the thrill of the project and the people being successful.

Or, more succinctly: Community is strongest when thoughtful people are using their knowledge and passion to encourage others.

Unfortunately, when these qualities don’t exist in a community, something suffers… often the people that want to help it grow. I was reminded this week of what that can look like, and I thought it would be helpful to illustrate what a similar, but unbalanced, community looks like when these qualities aren’t in place.

Signs of a Languishing Community

Earlier this week I had a brief checkup at the doctor’s office. This medical group has recently transitioned to a large, complex, albeit all-encompassing patient software platform. While my check-in and check-out process wasn’t terrible, it wasn’t smooth either. It was plain to see that most of the people I interacted with had to strain through (what seemed like) hundreds of clicks to enter my name, phone, and insurance information.

I headed to the check-out receptionist when my appointment concluded. For the next 3-4 minutes she proceeded to get stuck in the process because of an error message requesting that she verify my birthdate and she couldn’t figure out how to do it. As she grew increasingly frustrated at the situation, she asked, “Would you mind going to the check-in desk and seeing if they can fix this and finish your check-out?”

“No worries,” I said, “I’m happy to do that.”

When I approached the counter and asked if they could finish things up, this more experienced employee was done in all of 15 seconds. It was the next statement that made me think about a healthy technical community, or an unhealthy one.

Turning to the other receptionist beside her (and forgetting I was standing right there), she said, “I don’t know what her problem was. It was so simple. All I had to do was verify the birthdate on the second screen. I don’t know how she’ll ever learn if she doesn’t ask for help.”

Maybe you had to be there, but I could see immediately why nobody was asking this team for help. The newer employee wanted to know how to do the job, but the ones with the knowledge didn’t appear to have much passion for the subject, or much thoughtful consideration for how others could benefit from their skill.

I realized in that moment that I’ve been around technical communities with a similar imbalance, but I’ve rarely stuck around long enough to see how the community fared long-term.

The PostgreSQL Community Is Different

Thankfully, that’s not been my experience with the PostgreSQL community. 🙌

I’ve detailed in other blog posts and talks about my journey back to PostgreSQL and my quest to find community. When I started asking for help, folks like Andreas Scherbaum, Alicja Kucharczyk, Pat Wright, and Dave Cramer quickly reached out to see how they could help. When I offered to volunteer with setup and room hosting duties at a few conferences this year, the organizers were very welcoming. In fact, the idea for PGSQL Phriday came out of my previous SQL Server community.

What I’ve come to realize and deeply appreciate over the last 4+ years is that PostgreSQL has a very diverse and global community, and as it grows and matures even further, it is bursting with opportunities to build upon a solid foundation.

It also feels like there was a sizeable shift during the pandemic. Although the popularity of PostgreSQL has been growing for years, the hypergrowth of so many technology companies over the last couple of years has brought a significant amount of attention and focus to Postgres as a platform.

Remember – thoughtful, knowledgeable, passionate.

Resources, conferences, Slack apps, and even helpful Wiki pages need to be referenced more often than we realize. Users need to be reminded frequently that questions from new users and veterans alike are welcome. It’s easy to forget once you’ve been using a technology for 5+ years how difficult it can be to get started. And sometimes, it feels embarrassing to ask a fundamental question if you’re considered an “experienced user”.

Where will the PostgreSQL Community be in 2023?

So what does that mean for the PostgreSQL community in 2023? Personally, I think between events, the exciting pace of feature development, numerous community initiatives for education and learning (and the sheer popularity of the platform), we’ll see significant growth in developers, DBAs, and “second career” folks looking to see how we function as a community. I’m excited for what they’ll find. 😉 (To be honest, I’m 4+ years into this and I still feel like I’ve just started to scratch the surface!)

With that in mind, here’s a list of some of the ways I hope to connect with thoughtful, knowledgeable, and passionate PostgreSQL users in the PostgreSQL community this coming year. I’d love to hear your ideas and feedback, too.

Events

There are certainly far more events than what I’ve listed. These are just the mostly “don’t miss” events from my perspective, given what I know about who runs them and the quality of their PostgreSQL content. (Disclaimer: I’m woefully disconnected from groups and events outside of the US and EU, although I’m hopeful that will change in the coming years.)

US Events

EU Events

Blogs, Podcasts, and Other Resources

Chat, Social, and Mailing lists

It’s Always About the People

Let me finish by repeating once again that the community exists only because of the people. Postgres, the database technology, would have died on the vine 26+ years ago if members of the community hadn’t committed significant time and energy to make it what it is today. But the project couldn’t keep growing if they didn’t continually pass on their knowledge and passion to the growing community.

If you use PostgreSQL, you can get involved, learn from these fine folks, and start giving back to others around you. Write a blog post, come to a conference and volunteer to do one thing, check out Slack and answer a question, or simply provide encouragement to the community that’s shepherding PostgreSQL forward.

If you use PostgreSQL, you’re already a part of the community and I can’t wait to get to know you better!

Adventures in PostgreSQL Backups

For PGSQL Phriday #002, Andreas Scherbaum asked us to consider PostgreSQL backups, how we do them, any recommendations we have, and whether pg_dump is a backup tool.

To be honest, I feel pretty unqualified to talk much about backups in Postgres, partially because it’s been a couple of years since I’ve had to manage the uptime of an application. As PostgreSQL has grown in popularity and usage, a lot of effort is being put into tools like pgBackRest, Barman, and some newer solutions like pgcopydb (OK, I realize this isn’t billed as a “backup” tool, but…).

What I can share are a couple of things I’ve learned about restoring clusters and databases, particularly given the extensible nature of Postgres and the variety of tooling.

Which is a good segue to mention…

Restores > Backups

I think this goes without saying in most cases, but if I’ve learned anything by parenting six kids, repeating the basics (almost) never hurts. 😉

Backups are only as good as your ability to restore the data in the business appropriate timeframe.

It’s really that simple. No matter how much you work to dump the data or back up the data files in the appropriate way, if they can’t be restored it was all an exercise in futility.

It doesn’t matter whether the tool is pg_dump or pgBackRest. If you can’t get back to a state that is appropriate for your business or application, the tool choice is irrelevant. Knowing how to use the tools requires practice, patience, and knowledge of the potential pitfalls.

What Counts as a Restore?

For most of us, talking about “restores” usually means the data only, and maybe security/users included. In my experience there’s one other aspect of Postgres that can significantly impact the restore of your data and the smooth recovery of your application.

Extensions! 😱

One thing that most users don’t take into account when they restore data is that extensions are not backed up with the data. Regardless of how you do the backup (again, pg_dump or a file-level backup), it (usually) contains just the schema and data of your database. If you installed extensions that are necessary for your application to work, the actual extension files are not backed up. This presents a few problems if you didn’t plan ahead and are caught in a bind.

Extension Availability

First, if you restore your database to a new server and those extensions aren’t installed or available in the contrib packages that ship with the core PostgreSQL installation, your application might not work even if all of the data was restored.

Let’s say your application relies on a specific Foreign Data Wrapper, maybe the redis_fdw. If that extension isn’t installed on the new server when you restore the data, your application probably isn’t going to work even though all of the data has been restored.

This is because the command to install the extension is backed up (and in some cases configuration information for the extension to run normally), but when the restore executes something like CREATE EXTENSION redis_fdw;, you’re going to get an error. Assuming the restore continues and all of the data still gets restored, the extension itself still isn’t available.

Therefore, all of the clusters that you want to use as destinations for restoring data must have the same extensions installed and ready to go. Again, if you’re only using extensions that are provided as part of contrib, you’re probably OK most of the time. But due diligence is required when your application and users rely upon you being responsible with their data and availability.
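
A quick way to audit this is with two catalog views that ship with PostgreSQL (the extension names below are just placeholders):

```sql
-- On the source database: which extensions does it actually use?
SELECT extname, extversion
FROM pg_extension
ORDER BY extname;

-- On the destination server: what can be installed, and at which version?
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name IN ('redis_fdw', 'pg_stat_statements'); -- placeholder names
```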

Extension Versions

To make things even more complicated, the version of the extension is important to consider, too. Depending on what the extension is and the functionality being used, you are generally responsible for making sure that the correct version of the extension is available. This is easy to see in a textual pg_dump backup. The CREATE EXTENSION... command doesn’t specify the version, only that the extension needs to be created.

When an extension is upgraded between versions, various scripts can be run if there is catalog data or metadata to modify. For example, if my application relies on version 1.0 of an extension and I restore to a server that has version 1.5 installed instead, my restored database might not contain the various schema changes necessary for that extension to work. That usually means bad things when I try to query data or use (deprecated/deleted) extension functionality.

I don’t want to make too big a deal of this. In many cases, extensions don’t update very frequently and new versions might not have made changes to functions, data, or catalog settings. But any time you rely on a new feature or setup, this is a notable issue to consider.

In short, if you are responsible for data integrity among multiple PostgreSQL clusters, ensure that all extensions are updated to the same versions and document it somewhere along with a changelog. This is helpful for you to remember the state of your database and for others in your organization that need to help maintain the database.
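
When versions do need to line up, PostgreSQL lets you pin a version at install time and upgrade deliberately later (the version numbers here are illustrative):

```sql
-- Install a specific extension version rather than the packaged default
CREATE EXTENSION hstore VERSION '1.4';

-- Later, run the extension's update scripts to move to a newer version
ALTER EXTENSION hstore UPDATE TO '1.8';
```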

Selective Backup and Restore

The other thing that is particularly helpful with pg_dump and pg_restore is that there are many options to do selective operations.

  • Need to back up everything but that really large table only used for transient ETL data? pg_dump... --exclude-table=large_table
  • Are you trying to create a baseline script for a new database DevOps tool like Flyway and only require the database schema? pg_dump... --schema-only
  • Will this file be used for demonstration purposes and you can’t guarantee the same roles are available on the destination server? pg_dump... --no-owner
  • Conversely, did you receive a backup script and keep getting “role does not exist” warnings when you try to restore it? pg_restore... --no-owner

There are so many runtime options and permutations for each of these tools, there are many examples we could give. Suffice it to say, there’s a lot of power in using pg_dump and pg_restore for doing daily development work and helping other PostgreSQL users try to solve problems. Learning how to use both tools effectively is helpful for any PostgreSQL user.
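
Putting a couple of those options together, here’s a hedged end-to-end example (database, table, and file names are made up):

```bash
# Dump only the schema, skip a large staging table, use the custom format
pg_dump -d appdb --schema-only --exclude-table=etl_staging -Fc -f appdb_schema.dump

# Restore it elsewhere, ignoring the original ownership
pg_restore -d appdb_copy --no-owner appdb_schema.dump
```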

Bonus: Is pg_dump a Backup Tool?

But seriously, I’m not sure I really want to wade into this discussion. At face value, yes: pg_dump produces a transaction-safe backup of whatever you told it to dump. Assuming you know how to correctly restore that dump file, you technically have a good, working backup.

That said, this isn’t a plan for maintaining a production system, which requires fine-grained control and responsive replication. And that is an entirely different post (or series of posts).

In summary, pg_dump is technically a backup tool. But you’re going to be in a lot of pain if you try to use it for a high-availability setup. 😀

The Conversation That Never Ends

I’m excited to see what others talk about as part of this PGSQL Phriday. There are so many aspects of backup and restore that impact our daily lives. There’s no shortage of tips, tricks, triumphs, and horror stories in this area of PostgreSQL database management – and I think we could blog about it every month for a few years and still not cover everything.

Hopefully, shedding a little light on how extensions require just a little bit more attention as part of your backup scheme is helpful. Again, it all comes back to a successful restore, and extensions are part of that success.

Should you attend PASS Data Community Summit this year? It depends.

In the technology and database world there’s one timeless response, provided time and again to nearly any form of the question, “should I…?”.

“It depends.” (insert obligatory chuckle from the audience…)

  • Should I use a CROSS APPLY or LATERAL JOIN to solve my query problem? It depends.
  • Should I partition this table to improve parallel queries and ingest? It depends.
  • Should I move to managed instances in cloud X to solve my scaling problem? It depends.

What about your career? With conferences coming back in full force, should you take the step to attend a premier conference, make connections, learn from the best, get an edge heading into the coming year?

Well, it depends – on your career goals, job responsibilities, interest in cutting edge content, and desire to learn best practices from the creators of the features and tools you use every day. It might depend on how energized you get by connecting IRL with the larger data community after a three-year hiatus.

For me, these tangible conference experiences make a tremendous difference to my focus, skills, and relationships. Let me give you an example.

PGConf.EU 2022

I just returned from my first European trip in a long time. PGConf EU was held in Berlin from October 25-28, 2022. Like the PASS Data Community Summit, and nearly every other conference on planet Earth, this was the first time the wider PostgreSQL community in Europe was together, in person, since 2019.

🎉 It sold out a month ahead of time. 🎉

I don’t think this was an anomaly.

Over three days I got to shake hands and engage in meaningful conversation with more than 20 people whose friendships I’d formed virtually during the pandemic! ❤️ I got to attend sessions led by the core PostgreSQL team as they explained foundational concepts and new feature improvements. 🎓 And, I learned more by hearing the questions others asked in real time. 👂

I came away encouraged, thankful for the opportunity, and energized to continue investing in the community and the project. 🐘

The PASS Data Community Summit provides that same opportunity at a larger scale!

400+ Sessions and 3-month Recording Access

For deep-dive sessions, there are 18 different full-day pre-conference sessions. I think the introduction to PostgreSQL for SQL Server Developers and DBAs is going to be pretty 🔥, but the other 17 are equally 🥇!

The main three-day conference runs in-person and virtually from Wednesday-Friday, November 16-18. I can’t wait to be in Seattle with thousands of data professionals, listening to compelling keynotes, having access to hundreds of live sessions, attending social events, and spending time in the “hallway” track seeing old friends and making new ones. Plus, you’ll get access to all pre-recorded, virtual sessions that are live right now!

If you haven’t signed up yet and still think you could get to Seattle two weeks from now, I know it will be worth the investment!

A Groundbreaking Virtual Experience, Too!

If you can’t be there in-person, a virtual pass will give you all the same access to teaching and content from the comfort of your own home. Unlike the virtual sessions of the last two years, however, there is one major difference with PASS Summit. As you’re watching the session, streamed live from the conference, you’ll see the presenter and a live audience talking and interacting with one another… and you can join in too! 🎉🎉🎉🎉

Finally, a virtual conference that lets you feel like part of the conversation and crowd!

I think it’s fantastic and I’m excited to see how speakers and attendees alike respond to this mix!

Will You Join Us For PASS Summit This Year?

There’s really no reason to miss the excitement and world-class content. I think it depends (😉) only on whether you’ll be able to make it in-person or online.

PostgreSQL at the PASS Data Community SUMMIT 2022


This year at PASS Data Community Summit, I’ll be presenting a full-day precon about PostgreSQL.

Why is there a PostgreSQL precon at an event primarily focused on SQL Server & the Microsoft Data Platform?

Read along to find out more!

In just under a month, I’ll be in Seattle preparing for the PASS Data Community SUMMIT 2022. This is the first in-person Summit since 2019, and the first ever SUMMIT to offer both in-person and virtual attendance. In many ways, this is a homecoming for the #sqlfamily just as much as it is the premier data event for the growing Microsoft data platform. 🎉

I’m honestly thrilled to be a part of the event, especially because I get to offer a full-day precon session about PostgreSQL, the most advanced open-source relational database in the world… and one that Microsoft, Amazon, Google, and countless others continue to invest significant resources into.

Why is there a PostgreSQL precon at an event focused primarily on the Microsoft Data Platform… and why should you consider attending the training and PASS SUMMIT?

I’m glad you asked. 😉

A SQL Server Developer Walks Into a New Job

I’ve been working with data and software development for nearly 20 years. The first two years I built applications with open-source databases, primarily MySQL and PostgreSQL. For the next 14 years I worked for one software company that exclusively used SQL Server and .NET for all development. It was here that I got to know about PASS, SQL Saturday, and the larger #sqlfamily.

But then, a little more than four years ago, I joined a new company to help grow the data analysis team. Between the time I accepted the job and my first day, someone forgot to tell me that they were migrating away from SQL Server to Amazon RDS PostgreSQL.

That was a rude awakening on my first day.

I had to quickly figure out how to do even the most basic things with PostgreSQL that were so easy with SQL Server.

  • What IDE could I use without SSMS?
  • Why can’t I just select the TOP(1) record from a table?
  • How do I set (and use) a variable in a script? (See the quick sketch after this list.)
  • It’s 2018 and the best thing I can get to tune my query is this text dump from EXPLAIN and are you kidding me this is torture!!!!!!!! (only slightly paraphrased quote)
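
For anyone hitting those same first walls, here’s a hedged translation of the first two (the orders table and columns are made up):

```sql
-- SQL Server habit: SELECT TOP(1) * FROM orders ORDER BY created_at DESC;
-- PostgreSQL spelling:
SELECT * FROM orders ORDER BY created_at DESC LIMIT 1;

-- And psql variables can stand in for T-SQL script variables:
-- \set min_total 100          (run inside psql)
SELECT * FROM orders WHERE total >= :min_total;
```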

I started to quickly find success as I asked for help, applied principles from my years with SQL Server and all that I learned from #sqlfamily, and eventually… by not constantly wishing everything was like SQL Server.

The more I learned, the more I grew to love PostgreSQL and the community that surrounds it.

Ever since, I’ve been on a bit of a mission to pass on as much of that knowledge as I can… even though I know there’s still so much more I have to learn.

From PostgreSQL Newbie to Knowledgeable Hacker

The training course is intended to be a start to finish overview of the major parts of PostgreSQL and how they relate to the experience you already have as a SQL Server developer or DBA. While you’ll have the best experience following along inside of PostgreSQL yourself, you will still learn a lot (and walk away with all the training scripts and slides) by attending.

The expected outline of the course looks like this. Because it’s difficult to determine ahead of time the level of experience everyone brings into the course, it may be adjusted slightly as we go. But again, any of the prepared material we don’t cover fully will still be provided, and I’m happy to discuss it with attendees afterwards.

  1. Connecting to PostgreSQL and running your first SQL – I want you to experience PostgreSQL as quickly as possible. We’ll discuss a few good options for connecting to PostgreSQL quickly, run a few scripts, and talk about what we learned.
  2. Installation options – While it is assumed that attendees will have ready access to a PostgreSQL server to start the session, we will discuss the various options for installing a cluster. Remember, PostgreSQL is an open-source project which doesn’t have a single company managing it like a commercial product providing finely tuned GUIs for all of your settings.
  3. You’re Not in Kansas Anymore – Before we dive in too deeply, we’ll take a few minutes to talk about the major differences between SQL Server and PostgreSQL that are likely to surprise you. Knowing is half the battle. 😉
  4. Configuration – Like any complex database solution, there are many configuration settings. Knowing which ones to look at first, where to find help, and how to test them can mean the difference between fast queries or a dead server.
  5. Intro to psql – Many of us have grown to love a well-designed GUI for interacting with our database. It took me too long to realize that understanding `psql` and learning the basics made me a better PostgreSQL developer and administrator. This could be a half-day course unto itself, but you owe it to yourself to become familiar with this 🔥 tool. (A few starter commands appear after this outline.)
  6. Hands-on SQL comparison – Now that you’ve gotten connected and run some basic SQL, we’ll take time to look at many of the first “gotcha” moments writing queries in PostgreSQL. By the end of this section, you’ll be ready to read and write many of your common SQL patterns in a new way that works with PostgreSQL.
  7. Enhancing PostgreSQL with Extensions – Extensions are the magic power of PostgreSQL and one of the primary reasons so many companies are investing heavily in the platform. We’ll look at a few of the must-have extensions that are available with most PostgreSQL offerings.
  8. Indexing – Indexing is indexing… until it’s not and you have access to multiple datatype specific index types. 🤯
  9. Query Tuning – We can’t hide from it any longer. It’s time to dive into the world of EXPLAIN and query tuning in PostgreSQL. There are multi-hour classes available on this topic alone, so we can’t dive too deep, but you’ll know what to do with the wall of text and how to take next steps to improve your queries.
  10. Functions and Stored Procedures – Once again, they’re the same… but different. And functions aren’t always told they’re bad and to be avoided at all costs. They actually get invited to parties. 😊
  11. Triggers – Like functions, triggers aren’t shunned at all costs. How they work is similar (but again… different… and maybe better) and PostgreSQL developers believe they have a valuable purpose that doesn’t have to be a hidden menace.

🎉 BONUS section 🎉

No training would be complete without a review of how to get help and connect with other developers. Although we’ll certainly talk about community throughout the day, I want to take a few minutes at the end of the session to point you in the right direction to continue learning about PostgreSQL and continue being successful when the time comes.

What We Won’t Cover Directly

I know a full day of training feels like we should be able to cover everything. I must keep reminding myself that some subjects are just too deep and specific for the first intro most users will have to this new (to them) database server.

After receiving some questions about the course recently, I changed the title to more clearly align with the description of the course (and the content outline above). That is, the course will more closely align with users that are database developers, architects, and “accidental DBAs”.

Therefore, this time around, we will not specifically cover some deeper-level topics that would be labeled more classic “DBA” tasks including:

  • More complex backup and recovery for enterprise installations
  • HA and replication setups
  • Upgrades from one version of Postgres to another
  • Security and access deep dive
  • Deeper database maintenance scenarios

I have no doubt that some questions and side discussions will hit on aspects of these topics, but we won’t do any hands-on work here specifically. Maybe that will be part two for next year. 😀

Please Join Us!!

I hope that this precon is helpful to many folks attending SUMMIT. Coupled with the other fantastic precons and 300+ regular sessions, there is so much to learn at the PASS Data Community SUMMIT this year as we have the opportunity to come together for the first time in three years.

Please join us, either virtually or in-person if you’re able! I have no doubt it will be the highlight of your year.

See you November 15-18!

PGSQL Phriday #001 Wrap-up and review!

As I sit here on October 7, 2022 watching PostgreSQL friends from all around the globe post contributions to the first ever PGSQL Phriday blogging event, I’m honestly pretty shocked… and very (very, very, very) grateful! While I have lots of ideas and love connecting with people to hear their stories, I wasn’t sure what to expect because, let’s face it, there are so many demands for our time and attention.

The fact that many folks have been supportive of this idea and contributed to this first event truly warms my heart. Thank you each for helping to get this ball rolling!

Recap: How good is the Postgres community at lying? 😉

Quick note to future hosts: You don’t have to give personal background on every contributor. That’s just how I think about these interactions. Feel free to just give a brief summary of the post!

Lætitia Avrot – Lætitia has been one of my go-to sources for learning psql better as I’ve grown in my PostgreSQL journey. (She has lots of other 💯 content and presentations too!) So, it comes as no surprise that for the inaugural PGSQL Phriday Lætitia would give us some nuggets about the beloved command-line tool, with her own spin on the “two truths and a lie” format. 😂 Specifically, I really appreciated her second tip that points to repeatability when collaborating with others.

Lutz Horn – One thing that I’ve grown to love about the PostgreSQL community is the diverse avenues of communication. Lutz submitted his post through the pgsql-general email list and I am so thankful he reached out! Lutz’s post talked about three features of PostgreSQL (ON CONFLICT...DO UPDATE, table inheritance, and CTEs) that he uses often… or hopes to use more often… in his work with PostgreSQL.

Grant Fritchey – I’ve known Grant’s work and teaching for many years in the SQL Server world. It’s been fun to see him learn more about PostgreSQL and relate it to his expertise with SQL Server, not that dissimilar to my (ongoing) journey these last 4+ years. Grant took this opportunity to talk about backing up and restoring databases in PostgreSQL, including the introduction of a new PostgreSQL command I’d never heard of… 😉

Andreas Scherbaum – Ads is well known to the PostgreSQL community and has been a consistent source of help and connection for me personally. He faithfully helps with multiple PostgreSQL conferences, teaches about multiple topics, and continues to post “PostgreSQL Person of the Week” interviews to help highlight members of the community. The first presentation I ever saw Andreas give was about PostgreSQL deployments with Ansible, so I was happy to see him provide some guidance on configuring dates, backups, and roles in PostgreSQL.

Andy Atkinson – I had the pleasure of meeting Andy at PGConf NYC 2021, in that magical window between Thanksgiving and the third week of December 2021 when a few in-person conferences were able to meet before things shut down again for a month because of COVID spikes. He’s an active Ruby on Rails developer and speaker, and his post looks at three ways to improve a Rails application that utilizes PostgreSQL as the database. If you’re a Rails developer, you should bookmark Andy’s site and Twitter account to learn more about developing great PostgreSQL apps on Rails.

Pat Wright – More and more folks in the U.S. contingent of PostgreSQL users have seen Pat become increasingly involved in the community and conferences as the world continues to open up again. He’s an avid conference organizer in many communities and has such a great vision for helping the community grow. In his day job Pat has to manage a lot of database infrastructure, including HA and replication. His post focuses on a few important settings to make sure replication is working well and what to look for when things aren’t going as expected.

Conclusion

As I said at the beginning, I count six blog posts as a huge success for this first PGSQL Phriday. Next month Andreas is going to host and will probably be asking for content around backups (although he’s totally free to change that!).

I hope you’ve learned something and will consider taking part in the future with your own blog post.

PGSQL Phriday #001 – Two truths and a lie about PostgreSQL

Welcome to the inaugural invite of #PGSQLPhriday, a new, monthly blogging event for the larger PostgreSQL community! Each month, a different member of the PostgreSQL community will act as the host, choose the topic, and write an invite post similar to this. Once the blog posts are contributed, the host will write a wrap-up of all the contributions. What a great way to share your knowledge with other PostgreSQL users!!

(The topic for this first event is explained a bit further 👇, after a brief detour to discuss the last-minute name change.)

Hold on, I thought this was #PSQLPhriday?

OK, you caught me. Probably one of the worst things to do on the inaugural invite is to change the event’s name. 😬 The truth is, since announcing the idea, a few different community members expressed concerns about “psql” (rather than “pg”) and “phriday” (rather than “Friday”). They’re all valid concerns, and it’s hard (for me) not to want to accommodate everyone.

Because this idea is taken from the SQL Server community and #TSQLTuesday, I wanted to give a nod to that long-running event, and “psql” (no “G”) seemed like a natural fit. But “psql” is a bit overloaded too because of the 💯 command line tool of the same name, so there was a small chance that name could cause some confusion.

And so this weekend, as I debated what to do, I realized that all of the official PostgreSQL mailing list names used “pgsql” as the prefix. Sticking with this same prefix still allows the event to (subtly) give a tip of the hat to the original TSQL Tuesday event while still pointing to the decades of hard work and collaboration that has occurred on the mailing lists, a place where so much history of Postgres is documented and available.

And so, #PGSQLPhriday it is. No exchanges this time, I promise.

Thanks for understanding!

Choosing a topic for the inaugural event

I’m not going to lie. It’s been challenging for me to pick the first topic to get this party started. I seem to gravitate to topics that are too broad (“Why do you love Postgres 15?”), or too narrow (“Tell us about your best contribution to PostgreSQL”). In the original announcement post, I even hinted that the topic would simply be “Why Postgres?”

I also realize that this first event comes at a unique time in the PostgreSQL lifecycle: many folks just attended PGConf NYC 2022, PostgreSQL 15 is being released on October 13, and PGConf EU is October 25-28 in Berlin. A lot is happening right now, and any of these special milestones could be inspiration for an interesting topic. (“Which PostgreSQL 15 feature are you most excited about?”, “What have you enjoyed most about attending in-person conferences again?”)

But as I contemplated the topic (for far too long…), I recalled one of the more unique talks I was able to attend at PGConf NYC given by Ilya Kosmodemiansky, “PostgreSQL Worst Practices.” In his presentation, Ilya provided insights into many configuration and development issues folks run into with PostgreSQL by admonishing us to do the “worst” practice. (ie. “Absolutely don’t take regular backups of production, but if you do, only use pg_dump”) It was a clever and humorous presentation that provided good laughs and great dialog about true best practices.

So that got me thinking about some clever ways we could share tried and true PostgreSQL development and configuration tips with one another. If you spend any time on Twitter, the mailing lists, or Postgres Slack, there is no shortage of questions about common configuration and development best practices. While some folks have written blog posts to address these common questions over the years, there is always room for others to contribute.

Your mission, should you choose to accept it…

For this first event, I want you to write a blog post that shares three of your go-to Postgres best practices, but write it in the form of the game Two truths and a lie.

For example, perhaps there are three configuration settings you always modify whenever setting up a new cluster. You could share why these settings are important to modify and how you find the correct values, but for one of them, use Ilya’s comical approach to teach us the correct way to set the value. (ie. “I never set shared_buffers to more than 128MB to conserve server resources.” 😱)

Maybe you have three go-to ways of monitoring your cluster or multiple SQL options to overcome a tricky query problem many PostgreSQL developers face. You’re limited only by your imagination.

And obviously, at the end of the blog, be sure to clearly identify which of the three tips was a lie and how someone should actually modify the setting or write that tricky SQL statement. 😉

How to contribute your blog post

There are only a few “rules” that we ask everyone to follow when writing a blog post to contribute each month.

  • Use “PGSQL Phriday #001”/“#PGSQLPhriday 001” in the title or first paragraph of your blog post. (the number will change each month)
  • Link back to this invite. (i.e., “For #PGSQLPhriday 001, Ryan asked us to…”). If you don’t link back to the invite, you won’t be included in the summary list of blog posts.
  • Post your blog sometime on Friday, October 7, 2022. The world is a big place, so if it’s a little early or late… it’s probably still Phriday somewhere! 😃
  • Announce your blog post in at least one of the following places:
    • Twitter (with the #PGSQLPhriday hashtag)
    • The PostgreSQL community Slack
    • The `pgsql-general` mailing list
  • Read other blog post contributions and encourage the authors. Remember, the goal is to increase community involvement and knowledge sharing.

I can’t wait to see what you contribute on Friday, October 7, 2022! 🎉🎉🎉

PGSQL Phriday: A Monthly Blog Event for the PostgreSQL Community

Not quite a year ago, I had the opportunity to give a keynote address at PGConf NYC 2021 as part of Timescale’s sponsorship. Thankfully, the leadership at Timescale supported me in addressing the topic of community and how, particularly in light of the upward trend of PostgreSQL adoption, investing in the growing community can help everyone involved.

In that talk, I drew on a few of my experiences as a SQL Server developer, specifically within the PASS and #sqlfamily community. Although I understand that each community has its ethos, I wondered aloud if some of the initiatives that had drawn me further into the SQL Server community could provide ideas on ways to engage the growing PostgreSQL user base.

One of those initiatives was focused on the monthly T-SQL Tuesday blogging event started by community members more than a decade ago. The long-running event is simple.

Each month a community member volunteers to host the event. They select a topic and “invite” anyone to participate through a short blog post addressing the topic. Then, on the second Tuesday of each month, anyone in the community can post their own blog about the topic, linking back to the original invitation blog. The main purpose of the monthly event is to provide a safe, open opportunity for anyone to blog with a group of community members. For many people, the biggest hurdle to writing down and sharing their experience with a specific technology is selecting topics… and believing that anyone cares about what they have to say. T-SQL Tuesday has started many successful blogs over the years because someone had a good prompt and others received encouraging feedback about what they shared.

As more people got involved, the community grew, spreading the knowledge and support of the community across a larger, more close-knit group.

I don’t know about you, but that sounds 🔥!

What does the PostgreSQL community think?

In the 2022 State of PostgreSQL survey, short and long-form blog posts were among the top three ways users prefer to learn about PostgreSQL! Users want to learn from the experiences of others and how they overcome technical obstacles with PostgreSQL. This seems to echo a recent blog from SQL Server consultant Erik Darling about his frustration with technical books, specifically regarding readability and code examples. Blogs (and other online forums) often have the least amount of friction in unlocking reader engagement.

In short, there are a lot of people in the PostgreSQL community who are looking for the knowledge and experience you have. It just has to be written down and published. Most of us (or at least me) need that consistent prompt to keep going.

The Key Ingredient: Volunteer Hosts

Very little coordination is needed to run a monthly event like this. Folks in the community can contribute a blog post when they’re able and there’s no penalty for missing a month. If someone posts a few hours early or late, that’s OK. The desire is to foster and encourage more participation because we believe it will strengthen the PostgreSQL community.

The only thing the initiative needs is a volunteer host, an individual from the community, that changes each month.

The responsibilities of the host are pretty simple:

  • Host blogs must be “syndicated” through Planet PostgreSQL. If yours is not, follow the process a few weeks ahead of time and reach out with any questions!
  • Choose the topic for the month and write a short invitation blog post about it.
  • On the Monday before the first Friday of the month (which is sometimes the end of the prior month!), publish your invitation blog post, which should syndicate through Planet PostgreSQL. Additionally, please send reminders on social platforms that link to the invitation and include the #PGSQLPhriday hashtag.
    • At a minimum, post to Twitter, PostgreSQL Slack, and the `pgsql-general` email list
  • Within a few days of the monthly blog event (~the following Monday or Tuesday), compile links and brief reviews of the “submissions” and post the summary to your blog.
  • Ensure there is a new volunteer host for the following month.

The topics can cover anything about, or related to, PostgreSQL. Hosts will often prompt with technical topics, sometimes with more general database/data questions, and other prompts might focus on soft skills for a successful technology career. Example invite posts might include some of the following ideas:

Technical:

  • “Why you love extension x”
  • “Your favorite feature of PG15 and why”
  • “How logical replication saved your bacon”

General PostgreSQL/database:

  • “What is your favorite tool that’s not psql and why?”
  • “What are your favorite queries for monitoring database performance?”
  • “What feature did you learn about reading the documentation that changed your approach?”

Soft topics:

  • “How have you contributed to building the PostgreSQL community this past year?”
  • “What’s one skill every PostgreSQL developer should know or grow in this coming year?”
  • “What was the first talk you gave about PostgreSQL and what did you learn that would encourage others to give talks too?”

That doesn’t look too hard, does it? Do you think you’d be interested in participating or hosting?

Could the PostgreSQL community try #PGSQLPhriday?

What are we waiting for?

When I brought this idea up at PGConf NYC last year, several folks jumped on board with the idea. The fact that it’s taken almost a year for me to create this post and get the ball rolling is mostly due to large doses of imposter syndrome. (Sigh…)

But I’m more bullish on the PostgreSQL community than ever, and I’m ready to dig in and see how I can help more.

Therefore, I’m proposing that we start with the first blog event on Friday, October 7, 2022. This falls between PGConf US and PGConf EU, and hopefully, within a week or two of when PostgreSQL 15 is officially released.

The basic ground rules we’ll encourage everyone to follow are minimal:

  1. For the best audience reach, work to get your blog listed at Planet PostgreSQL before posting your first PostgreSQL blog. Ensure your PostgreSQL blog adheres to the policies and then follow these steps.
  2. Post your blog any time on the first Friday of the month and link back to the host’s topic announcement blog post for reference. We’ll create a logo you can include and link back to the host post. More to come. 😉
  3. Using your social platform of choice, mention your post and include the #PGSQLPhriday hashtag. If the host misses your post in their wrap-up, assume that it simply got lost in the social media deluge, then reach out to give them a link and thank them for hosting! 😊
  4. Be encouraging and respond in kind to other folks that contribute posts to the event!

Pretty easy… right?!

Ready… Set… Blog!

The volunteer hosts for the first few months are already lined up and listed below, along with names of community leaders that have agreed to host starting in 2023. If you’d like to lead a month in the future, reach out to me through Twitter, the PostgreSQL Slack, or email. I’ll do my best to keep things flowing as we get this party started!

Date               Host                     Proposed Topic
October 7, 2022    Ryan Booz                Why PostgreSQL?
November 4, 2022   Andreas Scherbaum        Upgrade PostgreSQL
December 2, 2022   Pat Wright
January 6, 2023    Henrietta Dombrovskaya

Initial hosts for the first four PGSQL Phriday events!

Upcoming hosts: Ryan Lambert, Grant Fritchey, Michael Christofides

Be on the lookout for the first intro post on October 2! Until then – #LongLivePostgres 🐘

P.S. A special shoutout

Before signing off I wanted to make sure that I gave a shoutout to Shaun Thomas from EDB. When a few community members came up with the “PGSQL Phriday” name, we didn’t know that Shaun had already been blogging for a few years with a similar heading, “PG Phriday”. When I reached out to see how he would feel about us using the name, he was quick to say, “No problem! I’ll probably join in.”

Thanks for your willingness to let us try to grow this a little bit more, Shaun, and get more of the community involved.