Vertabelo’s new SQL Script tool lets you write and save SQL scripts and run them against a connected database.

Creating an SQL Script

Start by clicking on the Create Document button. It’s the first one on the main toolbar. The New Document window appears. Now select SQL script and click on the Create button. The New SQL script window will be displayed. Name your script using the Name field and click the CREATE button. Your new script will be opened in the workspace as an empty file. The name of your script is visible in the top left corner of the toolbar. You can see a list of all your scripts by clicking on the drop-down menu arrow to the right of the current script name.
No matter which side of the equation you’re on, sometimes it’s tough to find a qualified person for a specific job. In this post, we look at a data model to help recruiters and HR departments stay organized during the hiring process.

Most of us have been involved in the hiring process – most often as the job applicant. However, we can also find ourselves involved on the hiring side, maybe by testing the applicant’s technical knowledge. The recruiting process takes a certain amount of time, and the group of applicants grows continuously smaller as we get closer to the final decision. The result should be the selection of the best person for the job.
If you were a fan of Linux and SQL Server, you basically had to have two operating systems at all times. But now, Microsoft has made a Linux version of their relational database management system. How does it work, and how well does it perform?

Microsoft products used to be Windows-centric, but in recent years Microsoft CEO Satya Nadella has started moving the company towards cloud-based and open-source projects and enabling their software to run on other platforms. Because of this, running SQL Server, Microsoft’s RDBMS, on a Linux system is now a possibility.
Renting goods and services is very popular today. Services like Airbnb rubbed the renting lamp and let the genie out, especially for travelers. They’ve opened up new horizons, and in the future we can expect that all kinds of rental services will become even more common.

In this article, we’ll describe a database model that could be used to run an application for renting apartments, rooms, and anything else you can think of.
Booking a doctor’s appointment using an online app is an innovation that simplifies the entire process. Let’s dive into the data model behind an appointment booking app.

Why use an app? It makes it easier for people to find the doctors of their choice, letting them see the doctor’s professional records and patient reviews. When someone finds a doctor they like, they can book an appointment with them without leaving the app. An app can also help doctors keep their patients’ waiting times as short as possible, help them schedule their patients, and enable them to keep an eye on patients’ online reviews.
Organizing an event is a lot of work! In this article, we examine the data model behind an event organization app.

If you’ve ever tried to organize an event for more than ten people (and don’t count parties or business meetings here) you know how complicated event management can be! Have we invited everyone? Have they confirmed if they are coming? Is the venue booked and prepared? Who will host the event? Who will participate in the various parts? There are many other questions to answer, and things could easily go wrong.
Other than location, what does it take to run a successful real estate business? We examine a data model to help real estate agencies stay organized.

Buying, selling, and renting apartments or houses is really big business today. Most people are happy to pay a fee and let a professional real estate agency do the work for them. On the other hand, the company could act on its own behalf, buying properties to resell or rent. A real estate company may also lease a property, then rent or sublease it and make a profit on the difference.
What is MySQL partitioning? What kind of partition types are there? How do you know if this is something your database engine supports? In this article, we tell you what you need to know about partitioning in MySQL.

MySQL partitioning is about altering – ideally, optimizing – the way the database engine physically stores data. It allows you to distribute portions of table data (a.k.a. partitions) across the file system based on a set of user-defined rules (a.k.a. the “partitioning function”). In this way, if the queries you perform access only a fraction of table data and the partitioning function is properly set, there will be less to scan and queries will be faster.
Being healthy and fit is a lifestyle, not a fad. People who realize the value of health make it a priority, keeping records of all their fitness-related facts. In this article, we’ll examine the design of the database behind a health and fitness application.

There are many applications which let users log their health and fitness information. A couple of big players like Apple, Google, and Microsoft have launched their own development APIs specifically for this market. For example, Google has ‘
The first two articles in this series highlighted SQL Server’s ability to organize data into user-defined windows and its aggregate functions. Part 3 will focus on other methods of aggregating and organizing data using built-in SQL Server features – specifically, its grouping functions. For consistency, the same base data will be used as in the first two parts of this series.

SQL Server’s Grouping Functions

The GROUP BY clause is the basis for SQL Server’s grouping functions: ROLLUP, CUBE, and GROUPING SETS. At its simplest, GROUP BY summarizes rows based on user-expressed conditions:
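ROLLUP builds on this by adding subtotal and grand-total rows on top of a plain GROUP BY. Here is a runnable sketch of the idea, using SQLite via Python (which lacks ROLLUP, so the grand-total row is emulated with UNION ALL); the sales table and its values are invented for illustration, not the article’s base data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100), ("East", 50), ("West", 70)])

# In SQL Server: SELECT region, SUM(amount) FROM sales GROUP BY ROLLUP (region)
# SQLite has no ROLLUP, so the grand-total row is appended manually.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total FROM sales GROUP BY region
    UNION ALL
    SELECT NULL, SUM(amount) FROM sales
""").fetchall()
```

The row with a NULL region is the rollup (grand-total) row, exactly the shape ROLLUP produces.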
What kind of data model can handle all the planning and activities used in process management? In this article, we discuss one design for a process management database.

Process management is a fairly straightforward and common concept. At its core, process management is simply deciding what needs to be accomplished – building a car or creating an app, for example – and then figuring out how to do it. Of course, the actual process itself is more complicated! It may require special tools, charts, or software, and it will certainly require a lot of planning and organization.
In this era of tough competition, job portals are not just platforms for publishing and finding jobs. They are leveraging advanced services and features to keep their customers engaged. Let’s dive into some advanced features and build a data model that can handle them.

I explained the basic features needed for a job portal website in a previous article. The model is shown below. We’ll consider this model as a base, which we will change to meet the new requirements. First, let’s consider what these requirements (or enhancements) should be.
Bus and train travel hubs are full of activity: the crowd, the rush, the lines, the race to the platform or terminal. Clearly, such places require a lot of organization! In this article, we’ll describe a database model that could keep a transport hub organized.

And that’s not easy. There are many parameters to account for: lines, stations, passengers, tickets, compositions (i.e. buses or trains), and the number of available seats on any given trip. Plus, before selling tickets or doing any similar action, we need to be sure that the result is the desired one.
What kind of database model does it take to run a gallery or museum? How can it be optimized to manage events, partnerships, and other activities?

When I think of a gallery or museum, I usually think of a peaceful place where you can hang out for hours, looking at interesting or beautiful things. Personally, I enjoy visiting the Technical Museum in Zagreb, which has cool exhibits like old cars, fire trucks, submarines, trams, and trains.
In this article, I’ll walk you through some fundamental considerations for working with date- and time-related data in MySQL. We’ll also look at how to handle multiple time zones and daylight saving time changes.

Let’s first address some core concepts that will help us understand the underlying complexity of time-related data. It is important to notice that these concepts apply when representing a point in time rather than an absolute duration.

Introduction
SQL Server R Services combine the power and flexibility of the open-source R language with enterprise-level tools for data storage and management, workflow development, and reporting and visualization. This article introduces SQL Server R Services and the R language.

What Is R and How Is It Used?

R is the most widely used language for statistics, data mining, and machine learning. (R is also the name of the environment and the engine that executes code written in the R language.) You need to learn to develop R programs just as you would with any other programming language.
When you’re using a data warehouse, some actions get repeated over and over. We will take a look at four common algorithms used to deal with these situations.

Most DWH (data warehouse) systems today are on an RDBMS (relational database management system) platform. These databases (Oracle, DB2, or Microsoft SQL Server) are widely used, easy to work with, and mature – a very important thing to bear in mind when choosing a platform. Although they include special features for DWH configuration (like bitmap indexes, partitioning, and materialized views) we still have to take care of data transformation ourselves.
A recurring event, by definition, is an event that recurs at an interval; it’s also called a periodic event. There are many applications which allow their users to set up recurring events. How does a database system manage recurring events? In this article, we’ll explore one way that they are handled.

Recurrence is not easy for applications to deal with. It can become a herculean task, especially when it comes to covering every possible recurring scenario – including creating bi-weekly or quarterly events or allowing the rescheduling of all future event instances.
Mastering SQL analytical functions is a necessity for an aspiring BI/DWH professional. In this article, we’ll explore the history of SQL in a BI environment. We’ll also introduce some of the more common SQL analytical functions, including RANK, LEAD, LAG, SUM, and others.

SQL can be used for business intelligence (BI) applications that combine data retrieval, analysis, and computation. These computations are more complex than what we see in OLTP systems. If you’re interested in getting into data warehousing (DWH) or business intelligence, you need to go beyond plain SQL and start digging into the analytical functions (also known as window functions, windowing functions, or OVER clauses).
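As a first taste, here is a minimal, runnable sketch of two of these functions (LAG and RANK), using SQLite via Python as a stand-in for a DWH engine; the sales table and its figures are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (yr INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(2019, 100), (2020, 150), (2021, 120)])

# LAG looks at the previous row in the window's order;
# RANK numbers rows by descending amount.
rows = conn.execute("""
    SELECT yr,
           amount,
           LAG(amount) OVER (ORDER BY yr)          AS prev_amount,
           RANK()      OVER (ORDER BY amount DESC) AS amount_rank
    FROM sales
    ORDER BY yr
""").fetchall()
```

Notice that every row keeps its detail values; the OVER clause just adds the analytical results alongside them, which is what sets window functions apart from GROUP BY aggregation.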
Whenever you need to save datetime data, a question arises about what MySQL type to use. Do you go with a native MySQL DATE type or use an INT field to store date and time info as a plain number?

In this article, I’ll explain MySQL’s native options and give you a comparison table of the most common datatypes. We’ll also benchmark some typical queries and reach some conclusions about which datatype to use in a given situation.
In part one of this series on SQL Server window functions, the functionality of the OVER() clause was discussed. This article will focus on aggregate functions. The majority of these can be used in conjunction with OVER. Some, like SUM() and AVG(), are widely used. Others — including VARP() and CHECKSUM_AGG() — are not as well-known, but are potentially quite useful.

OVER defines the user-specified range on which a function is applied. This can be combined with many SQL functions, allowing easy separation of data into defined windows within a single SQL query.
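For instance, SUM() combined with OVER returns a per-group total on every detail row, without collapsing rows the way GROUP BY would. A runnable sketch (SQLite via Python; the emp table is a made-up example, not data from the series):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("Ann", "IT", 100), ("Bob", "IT", 80), ("Cid", "HR", 60)])

# Every row keeps its detail values; SUM() OVER adds the
# department total computed over the PARTITION BY window.
rows = conn.execute("""
    SELECT name, salary,
           SUM(salary) OVER (PARTITION BY dept) AS dept_total
    FROM emp
    ORDER BY name
""").fetchall()
```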
Two new features in SQL Server 2016 make expanding your database’s historical data storage much easier. Here’s how to implement them.

Storage. It’s a problem that anyone with a constantly-growing database has to face. We maintain great volumes of data that are rarely queried by business users. Most of these are historical or versioned data that someone checks once a year to see how “product X sales looked in 2005” or “record Z has changed over time”.
PostgreSQL provides an activity-tracking module called the statistics collector, which tracks table access and other internal events. If your database is experiencing long wait times, you may be able to use this tool and some simple SQL to find and fix the problem.

Tracking Postgres Activity With the Statistics Collector Tool

The statistics collector is designed to keep records about internal activity in a Postgres database manager. It can:
- count table and index access in both disk-block and individual-row terms.
When you were learning database concepts, data modeling looked pretty easy, didn’t it? You knew all the rules, and modeling seemed like a game: get a challenge, do your best, and eventually solve it. Job well done! Moving up to the next level – and so on.

As you continue, though, you’ll see that database modeling is also an art. Many cases require a totally new approach. Everything can be done ‘by the book’, but sometimes you get better results when you go less orthodox.
Window functions were first introduced in standard SQL 2003 and then extended in SQL 2008. Since SQL Server 2005, Microsoft has been steadily improving window functions in the database engine. These functions perform an aggregate operation against a user-defined range of rows (the window) and return a detail-level value for each row.

Well-known aggregate functions include SUM, AVG, MIN, MAX, and many others. Recent versions of SQL Server have introduced ranking and analytic functions such as
Storage engines can surprise you. For example, take the CHAR data type. It expects an exact number of characters and by definition stores a fixed amount of information. However, you don’t have to fill all the available CHAR space – a shorter value will work. This is so similar to VARCHAR that I decided to explore the differences between these two types.

Before diving into the details, let’s start with some basic information. CHAR and VARCHAR are SQL data types dedicated to storing character values. They are available in almost every database engine. Due to database and encoding particulars, the storage of character values in CHAR and VARCHAR columns differs.
Some years ago, when PostgreSQL version 8.3 was released, a new extension called tablefunc was introduced. This extension provides a really interesting set of functions. One of them is the crosstab function, which is used for pivot table creation. That’s what we’ll cover in this article.

The simplest way to explain how this function works is using an example with a pivot table. First, we will explain our initial point from a practical perspective, then we’ll define the pivot table we want.
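crosstab itself is specific to the tablefunc extension, but the underlying idea of a pivot can be sketched portably with conditional aggregation. A runnable example (SQLite via Python; the scores table and its subjects are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (student TEXT, subject TEXT, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?, ?)", [
    ("Ann", "math", 90), ("Ann", "bio", 80),
    ("Bob", "math", 70), ("Bob", "bio", 85),
])

# Each CASE expression picks out one category, turning
# rows of (student, subject, score) into one column per subject.
rows = conn.execute("""
    SELECT student,
           MAX(CASE WHEN subject = 'math' THEN score END) AS math,
           MAX(CASE WHEN subject = 'bio'  THEN score END) AS bio
    FROM scores
    GROUP BY student
    ORDER BY student
""").fetchall()
```

With crosstab, the same reshaping is expressed by passing the source query as text and declaring the output columns, which saves writing one CASE per category.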
Database schema migration is never an easy job. In fact, it can really be a headache, even when you’re working with a familiar system. For example, at times Oracle 10g may not drop the associated index for a primary key or unique constraint that has been dropped. In this article, I am going to explain when and why this happens.

The Story

I’ve been working on the development of an e-commerce platform. Before every new platform version release, a migration script is prepared to move the database schema and data from the old version to the new one. This is done by the database architect in his local environment. Then the migration script is tried out against a test environment that restores the production database and simulates real conditions. (Restoring the production database in the test environment is done using the Oracle 11g Data Pump tool.)
Having reference tables in your database is no big deal, right? You just need to tie a code or ID with a description for each reference type. But what if you literally have dozens and dozens of reference tables? Is there an alternative to the one-table-per-type approach? Read on to discover a generic and extensible database design for handling all your reference data.

This unusual-looking diagram is a bird’s-eye view of a logical data model (LDM) containing all the reference types for an enterprise system. It’s from an educational institution, but it could apply to the data model of any kind of organization. The bigger the model, the more reference types you’re likely to uncover.
We’ve had tremendously positive feedback on my recent article that talked about “Why SQL is neither legacy, nor low-level, nor difficult, nor the wrong place for (business) data logic, but simply awesome”, both within the blog’s comment section as well as on reddit. However, one of the sections triggered very controversial feedback. Clearly, not everyone agreed to:

Fallacy #5: The database is the wrong place for business logic

Now this is really the biggest of all fallacies. Who said that? Some guy from the 90s who had written a book on Object-Oriented Design Patterns? Of course, if you blindly follow random architecture rules, this may apply to you. But then, beware that you’re following 10-20 year-old “legacy” principles by not using SQL (see also
Sometimes, you want all identifiers in your SQL script to be quoted. For example, you want them to have a particular case. Luckily, Vertabelo allows you to quote all SQL identifiers in your generated script, preventing SQL name conflicts. Go to your model. Click the SQL generation settings tab in the Model properties panel on the right. Check Quote all SQL identifiers. If you want to check out the result, select any table and click the
Tony Hoare, who is most often referred to as the inventor of the NULL reference, now calls it a billion-dollar mistake which pretty much all languages are now “suffering” from, including SQL.

Quoting Tony (from his Wikipedia article):

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
The General Availability version of MySQL is still version 5.6, but the development release of MySQL 5.7 definitely introduces some exciting changes to the world of database management systems. Is it worth giving it a try? In this article, we’ll have a closer look at a few brand-new features that may help you decide to do so.

Native JSON Support

JSON (short for JavaScript Object Notation) is a format for storing information which can be a good alternative to XML. If you haven’t encountered it already, take a look at a very simple example of a JSON document:
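To get a feel for querying JSON in SQL, here is a tiny sketch. It runs on SQLite’s json1 extension via Python (since a MySQL 5.7 server isn’t assumed here); MySQL’s function is also called JSON_EXTRACT, and the document shown is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
conn.execute("""INSERT INTO docs VALUES ('{"name": "Alice", "age": 30}')""")

# json_extract pulls a value out of the stored document by path.
name = conn.execute(
    "SELECT json_extract(body, '$.name') FROM docs").fetchone()[0]
```

In MySQL 5.7 the column would use the new native JSON type, which additionally validates documents on insert.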
Why SQL Is Neither Legacy, nor Low-Level, nor Difficult, nor the Wrong Place for (Business) Data Logic, but Is Simply Awesome!
The following fallacies are things that I hear all the time.

“SQL is legacy. Why can’t we work with more modern tech?” – Timeless. E.g. by someone who thinks that NoSQL databases are “modern”.

“SQL is low level, like assembler. Would you prefer to work with assembler or with Java? Similarly, would you prefer to work with SQL or with Hibernate?” – Timeless. By someone who thinks that SQL is low-level.

In this article, let me clear up some common SQL prejudices and other fallacies. SQL is neither legacy, nor low-level. Here’s why:
A common challenge for database modellers is deciding which data type is the best fit for a particular column. It is a problem which involves consideration of both the properties and the scale of the data that will be stored, and in no case is this more evident than when handling numeric values because of the large variety of alternatives that most relational databases provide for their storage.

In this article we’ll explore the numeric data types available in SQL Server and analyse their advantages, disadvantages and general usage.
When designing a database, early decisions can have a huge impact on the performance and storage requirements. These decisions can be difficult to change later, as most subsequent work will depend on the physical model. This article highlights some common design decisions, flaws, and misconceptions.

Creating a Primary Key: Uniqueidentifier or Integer

When defining a surrogate primary key for a table, two options are the most common: Integer and UniqueIdentifier (a.k.a. Globally Unique Identifiers
The SQL language seems easy to learn – the commands follow a simple syntax and don’t describe the specific algorithms used to retrieve data. The simplicity might, however, be deceptive. Not all database functions operate with the same efficiency. Two very similar queries can vary significantly in computation time. This article presents some of the best practices that can greatly boost your SQL queries.

1. Learn How to Create Indexes Properly
OR/M and NoSQL guys see SQL as an awkward way to store and retrieve objects from the database. Here at Vertabelo, we see SQL as a useful language for querying data. Making reports in pure SQL is a pure pleasure. Running an interactive query is almost the same experience as asking a human to provide information.

At some point you’ll need to write a report in a few seconds. Launching an IDE and coding is not a viable option. In this article I’ll show you how to perform and format ad-hoc reports using the PostgreSQL command line client. I’ll focus on formatting options, so the examples will be simplified.
The concept of views and function-based indexes has been known for many years. A newer solution is the virtual column – a feature introduced in Oracle 11g. Apart from the database giant, some well-known DB vendors, like MariaDB and SQL Server, support the idea of computed columns. So let’s give virtual columns a try and examine their basic usage.

Generally, there are two kinds of virtual columns:
- virtual virtual columns
- persistent virtual columns
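The distinction is whether the computed value is stored on disk (persistent) or computed on read (virtual). Here is a runnable sketch of a virtual computed column, using SQLite generated columns (available since SQLite 3.31) via Python rather than Oracle; the items table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# 'total' is never written by the application; the engine
# derives it from the expression on every read (VIRTUAL).
conn.execute("""
    CREATE TABLE items (
        price INTEGER,
        qty   INTEGER,
        total INTEGER GENERATED ALWAYS AS (price * qty) VIRTUAL
    )
""")
conn.execute("INSERT INTO items (price, qty) VALUES (5, 3)")
total = conn.execute("SELECT total FROM items").fetchone()[0]
```

Oracle 11g uses a very similar GENERATED ALWAYS AS (...) VIRTUAL syntax; the stored variant (e.g. MariaDB’s PERSISTENT) trades disk space for cheaper reads.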
Learning new things should be fun. I’ve read about database triggers multiple times but I didn’t have a chance to use them. Triggers are not popular these days, especially in web development. Let’s mix learning with making fun (of others).

What is a trigger for? It’s a kind of aspect-oriented programming: extra business logic executed on a given action. With triggers you can do complex validation, update balances in summary tables, record the who, what, and when of changes to data, and so on. That’s triggers in a nutshell.
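As a small taste, here is a runnable audit trigger (SQLite via Python; the table and column names are made up) that records the “what and when” of balance changes without any application code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit (account_id INTEGER, old_balance INTEGER, new_balance INTEGER);

    -- Fires automatically after each balance update and logs
    -- the old and new values; the application never calls it.
    CREATE TRIGGER trg_balance_audit
    AFTER UPDATE OF balance ON accounts
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")
audit_rows = conn.execute("SELECT * FROM audit").fetchall()
```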
Many features have arisen throughout the evolution of the Oracle database. In this article, let’s focus on temporary tables. They were introduced fairly late (Oracle 8i) and are now often considered to be indispensable for DBAs and developers.

Description

It’s worth mentioning up front that the table itself is not temporary, but rather the data within it. The data in such a table is stored only as long as the session or transaction lasts and is private to each session; however, the definition is visible to all sessions. Of course, after a commit or disconnection the data is lost, but the definition of the table remains, so there’s no need to repeat many DDL operations – especially CREATE TABLE – which is good practice. Likewise, other structures related to the temporary table, like synonyms or views, won’t disappear after the end of the transaction or session. Indexes created on a temporary table behave similarly.
Different languages have different alphabets and different ways to order letters within those alphabets. For example, the Polish character Ł comes right after L and before M. In Swedish, the letter Å comes almost at the end, right after Z. In French, diacritical marks have no impact on alphabetical order, so the letters À, Á, and Â are treated as the letter A when sorting strings.

A collation is a set of rules that defines how to compare and sort character strings. A collation tells you what the order of characters is and which characters should be treated as the same.
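The effect of a collation is easy to see in a runnable sketch (SQLite via Python; SQLite ships only the BINARY, NOCASE, and RTRIM collations, so case sensitivity stands in here for the language-specific rules discussed above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (w TEXT)")
conn.executemany("INSERT INTO words VALUES (?)",
                 [("apple",), ("Banana",), ("cherry",)])

# BINARY collation compares code points: 'B' (66) sorts before 'a' (97).
binary_order = [r[0] for r in conn.execute("SELECT w FROM words ORDER BY w")]

# NOCASE folds case first, giving the order a human would expect.
nocase_order = [r[0] for r in conn.execute(
    "SELECT w FROM words ORDER BY w COLLATE NOCASE")]
```

The same ORDER BY returns different orders depending solely on the collation in force.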
As a follow-up to our article “The Most Useful Date and Time Functions in Oracle Database”, let’s review what date and time functions look like in MS SQL Server. Let’s start with functions that extract a year, month, and day from a given date.

SQL Server uses GETDATE() and SYSDATETIME() to get the current date and time. They are nondeterministic functions: views and expressions that reference them can’t be indexed. Both functions accept no arguments and return the local current date and time. The difference is, when we use
Date and time functions and formats are quite different in various databases. In this article, let’s review the most common functions that manipulate dates in an Oracle database.

The function SYSDATE returns a 7-byte binary data element whose bytes represent: century, year, month, day, hour, minute, second. It’s important to know that select sysdate from dual in SQL*Plus gives the same result as select to_char(sysdate) from dual, because SQL*Plus binds everything into character strings so it can print it. For a more detailed explanation, look at
Sooner or later there comes a moment when database performance is no longer satisfactory. One of the very first things you should turn to when that happens is database indexing. This article will give you a general overview on what indexes are without digging into too much detail. We’ll discuss additional database index topics in future articles.

In general, a database index is a data structure used to improve query execution time. To explain what an index is, we need to say a few words on how the data stored in tables is organized.
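The effect is visible even without timing queries: the engine’s plan changes from a full scan to an index search. A runnable sketch (SQLite via Python; the users table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

query = "SELECT * FROM users WHERE email = 'a@example.com'"

# Without an index, the plan is a full table scan...
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX idx_users_email ON users (email)")

# ...and with one, the same query is answered via the index.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
uses_index = any("idx_users_email" in row[-1] for row in plan_after)
```

Other engines expose the same information through EXPLAIN (MySQL, PostgreSQL) or execution plans (SQL Server).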
Synonyms are a very powerful feature of Oracle. They are auxiliary names that relate to other database objects: tables, procedures, views, etc. They work like Unix hard links: a pointer that points to an object existing somewhere else.

Synonyms can be created as PRIVATE (the default) or PUBLIC.
- Public synonyms are available to all users in the database.
- Private synonyms exist only in a specific user schema (they are available only to that user and to grantees for the underlying object).
This article describes what a subquery is and what these useful statements look like. We will cover basic examples with the IN, EXISTS, ANY, and ALL operators, look at subqueries in FROM and WHERE clauses, and explore the difference between correlated and nested subqueries.

First, let’s start with an example database. To present some of these statements, we need to have an example table and fill it with some data.

What Is a Subquery?
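As a quick preview of IN versus EXISTS, here is a runnable sketch (SQLite via Python; the customers/orders tables are illustrative, not the article’s example database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO orders VALUES (1);
""")

# Nested subquery: the inner SELECT runs independently of the outer one.
with_in = [r[0] for r in conn.execute(
    "SELECT name FROM customers WHERE id IN (SELECT customer_id FROM orders)")]

# Correlated subquery: the inner SELECT references the outer row (c.id).
with_exists = [r[0] for r in conn.execute("""
    SELECT name FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id)
""")]
```

Both queries find the customers who have placed an order; the difference in how the inner query is tied to the outer one is exactly the correlated-vs-nested distinction covered below.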
SQL has been around for decades and supports a multibillion-dollar market. However, many people still struggle with just how to pronounce the term SQL. Is it “S.Q.L” [ˈɛs kjuː ˈɛl] or is it “sequel” [ˈsiːkwəl]?

SQL… Where It All Started

Let’s start at the beginning. Relational databases came into existence with E.F. Codd’s 1970 publication “A Relational Model of Data for Large Shared Data Banks.” While Codd’s ideas were remarkable for the time, in San Jose, California, two colleagues named
We’ve added an API to our Vertabelo application to help you incorporate some automation into your build system. Here are short instructions on how to use the Vertabelo API.

First of all, you have to enable access to the API. You’ll need to have at least a “Basic” account plan or be a member of a company.

1. From the User menu, go to the “My account” page. Scroll down to the “Settings” section.
In recent times, NoSQL databases have become a hot topic and have gained a crowd of advocates. Indeed, when NoSQL databases started to appear, SQL-to-NoSQL converters arose with them as well. But ironically, thanks to the code-conversion possibility, SQL supporters gained unquestionable evidence that in many cases the good, old SQL is much easier to use – especially when it comes to queries.

I performed a quick Google search and found plenty of nice converter tools. Here is one of them:
Everyone has solved crossword puzzles and has certainly had some problems finding an appropriate word. Thanks to SQL, it is ridiculously simple to quickly dispel your crossword doubts and get the correct answers. Of course, Google is commonly known as a universal cure for many doubts, but handling the problem yourself is much more rewarding.

Recently I came across some simple and interesting examples from Andrew Cumming’s book “SQL Hacks.” These examples won’t make a huge impression on programming experts, but less-experienced code wranglers could find them interesting. In this article I will present a fun approach to solving casual problems using the power of SQL, following Andrew Cumming’s book.
The standard DELETE statement in SQL returns the number of deleted rows. In PostgreSQL you can make the DELETE statement return something else: you can return all rows that have been deleted, or just the columns of your choice. In your code you can process the returned rows in the same way as you would process the results of an SQL query. For example, you may log the data that have been deleted.
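Here is a runnable sketch of the RETURNING clause (shown on SQLite via Python, which adopted the clause in version 3.35; in PostgreSQL the syntax is the same, and RETURNING * returns all columns; the logs table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER, msg TEXT)")
conn.executemany("INSERT INTO logs VALUES (?, ?)",
                 [(1, "old entry"), (2, "new entry")])

# The deleted rows come back like a query result,
# so they can be logged or archived in a single step.
deleted = conn.execute(
    "DELETE FROM logs WHERE id = 1 RETURNING id, msg").fetchall()
remaining = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
```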
Often you want to select a single row from each GROUP BY group. PostgreSQL has a statement especially for that: SELECT DISTINCT ON. Let’s say I want to select one weather report for each location. The query retrieves one weather report for each location. You can use several expressions in a SELECT DISTINCT ON statement. For each resource, the query retrieves a single backup record for each week.

SELECT DISTINCT ON with ORDER BY

The undecorated SELECT DISTINCT ON selects one row for each group but
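SELECT DISTINCT ON is PostgreSQL-specific, but the “one row per group” idea can be emulated portably. A runnable sketch (SQLite via Python; the weather table and its rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weather (location TEXT, report_time TEXT, temp INTEGER)")
conn.executemany("INSERT INTO weather VALUES (?, ?, ?)", [
    ("Oslo", "2024-01-01", -5),
    ("Oslo", "2024-01-02", -3),
    ("Rome", "2024-01-01", 12),
])

# Postgres: SELECT DISTINCT ON (location) * FROM weather
#           ORDER BY location, report_time DESC;
# Portable emulation: keep the row whose report_time is the
# latest for its location (correlated subquery).
rows = conn.execute("""
    SELECT location, report_time, temp
    FROM weather w
    WHERE report_time = (SELECT MAX(report_time)
                         FROM weather
                         WHERE location = w.location)
    ORDER BY location
""").fetchall()
```

The DISTINCT ON form is shorter and lets the ORDER BY decide which row survives in each group.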
If your default programming language, like mine, is Java, you most likely wince at the very thought of date arithmetic. (It’s changed for the better with Java 8 but Vertabelo is not there yet.) The date arithmetic API in PostgreSQL is like a breath of fresh air.

Interval Arithmetic

You can represent a time interval: Add an interval to a date or subtract an interval from a date:

Truncating Dates

You can truncate dates to the specified precision.
Some time ago, the Vertabelo Team participated in the PostgreSQL Conference Europe 2013. Some of the talks were really nice. One of them stuck in my head for quite a long time. It was Markus Winand’s lecture titled “Indexes: The neglected performance all-rounder.” Although I had had a solid background in databases, this 50-minute talk showed me that not everything concerning indexes was as clear to me as I had thought. This was the kind of lecture I like the most – when you sit and say to yourself “damn, I didn’t know that!”
In the previous article we wrote a simple Android app that lets the user manage a to-do list: add new tasks, mark them as done, and delete them. That article showed how to create an SQLite DB in an automated way and how to do some simple CRUD operations on it.

Let’s say that the first version of the application is released, we distribute it (e.g., it shows up in Google Play), and people download and use it. After some time we decide to improve the app. Our goal is to add the ability to prioritize tasks.
Generally, we don’t limit query results. However, when we only care about the first few rows or want to implement table pagination, limiting query results is just what we need. Database vendors provide us with such functionality, most of them in their own distinct way.

Example

Let’s take a look at the 2014 Sochi Olympics Men’s Normal Hill Individual ski jumping results in the skijump_results table. There is no index on the skijump_results table. The following queries are examples of a Top-N and pagination query, which I will use in the following parts of the article.
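The portable flavor of both query types uses LIMIT and OFFSET. A runnable sketch (SQLite via Python; the jumper names and points are placeholders, not the real Sochi results):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE skijump_results (jumper TEXT, points REAL)")
conn.executemany("INSERT INTO skijump_results VALUES (?, ?)",
                 [(f"jumper{i}", 300.0 - i) for i in range(10)])

# Top-N query: the three best results.
top3 = conn.execute("""
    SELECT jumper FROM skijump_results
    ORDER BY points DESC LIMIT 3
""").fetchall()

# Pagination query: page 2, i.e. skip the first page of three rows.
page2 = conn.execute("""
    SELECT jumper FROM skijump_results
    ORDER BY points DESC LIMIT 3 OFFSET 3
""").fetchall()
```

Oracle (ROWNUM, or FETCH FIRST since 12c) and SQL Server (TOP, or OFFSET ... FETCH) each spell this differently, which is exactly what the article explores.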
According to a report by the International Data Corporation, the Android operating system reached more than 80% market share during the 3rd quarter of 2013. Together with iOS, Android dominates in the mobile devices’ world.

This made me think that it may be worth writing a few words about how to create mobile applications for these two mobile platforms. But not the kind of “hello world” applications – there are plenty of tutorials about that. I’d like to focus on the use of local
When you create a foreign key in your database, you can specify what happens upon delete of the parent row. There are usually four possibilities:
- ON DELETE SET NULL
- ON DELETE CASCADE
- ON DELETE NO ACTION
- ON DELETE RESTRICT
Today we’ll investigate the subtle difference between the last two options.

In Some Databases There Is No Difference at All

In Oracle, there is no RESTRICT keyword. The only option is NO ACTION. In MySQL, there is
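In engines without deferred constraints, RESTRICT and NO ACTION behave identically, so the observable contrast in a simple sketch is RESTRICT versus CASCADE. A runnable example (SQLite via Python; note that SQLite enforces foreign keys only after PRAGMA foreign_keys = ON, and the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child_restrict (p INTEGER REFERENCES parent (id) ON DELETE RESTRICT);
    CREATE TABLE child_cascade  (p INTEGER REFERENCES parent (id) ON DELETE CASCADE);
    INSERT INTO parent VALUES (1);
    INSERT INTO child_restrict VALUES (1);
    INSERT INTO child_cascade VALUES (1);
""")

# RESTRICT: deleting a referenced parent row fails immediately.
try:
    conn.execute("DELETE FROM parent WHERE id = 1")
    restricted = False
except sqlite3.IntegrityError:
    restricted = True

# Remove the RESTRICT reference; now CASCADE silently deletes the child row.
conn.execute("DELETE FROM child_restrict")
conn.execute("DELETE FROM parent WHERE id = 1")
cascade_left = conn.execute("SELECT COUNT(*) FROM child_cascade").fetchone()[0]
```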
Suppose we design a database. We’ve created some tables, each one has a few columns. Now we need to choose columns to be primary keys (PK) and make references between tables. And here some inexperienced designers face the dilemma – should a primary key be natural or surrogate?

There’s one and only one answer to that question: it depends. If anyone ever tried to convince you that you should have only natural keys or only surrogate keys, just smile :)
Recently I was given a task which involved counting, with a single SQL query, all tables in each model version of Vertabelo. Vertabelo internally stores each model version as an XML file (download a sample XML file). So my task was to count all XML nodes satisfying a certain XPATH expression. Vertabelo runs on PostgreSQL 9.1, so PostgreSQL 9.1 tools were all I could use.

To begin at the end, the final query looks like this:
group_concat in MySQL

MySQL has a very handy function which concatenates strings from a group into one string. For example, let’s take a look at the children table with data about parents’ and children’s names. To get the names of each person’s children as a comma-separated string, you use the group_concat function as follows: The result: To make sure the names of the children in each string are alphabetically ordered and to use a semicolon “;” as a separator, use this query:
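Here is the idea as a runnable sketch (SQLite via Python, whose group_concat works the same basic way; unlike MySQL, portable SQLite doesn’t accept ORDER BY inside the aggregate, so ordering isn’t asserted here; the names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE children (parent TEXT, child TEXT)")
conn.executemany("INSERT INTO children VALUES (?, ?)",
                 [("Ann", "Mark"), ("Ann", "Lisa"), ("Bob", "Tom")])

# group_concat(col, separator) concatenates one group's values into
# a single string; MySQL spells it GROUP_CONCAT(... SEPARATOR ';').
rows = conn.execute("""
    SELECT parent, group_concat(child, ';') AS kids
    FROM children
    GROUP BY parent
    ORDER BY parent
""").fetchall()
```

In MySQL, `GROUP_CONCAT(child ORDER BY child SEPARATOR ';')` additionally sorts the names within each concatenated string.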