Hibernate ORM 5.3.15.Final User Guide
Table of Contents
Preface
System Requirements
1. Architecture
2. Domain Model
3. Bootstrap
4. Schema generation
5. Persistence Context
6. Flushing
7. Database access
8. Transactions and concurrency control
9. JNDI
10. Locking
11. Fetching
12. Batching
13. Caching
14. Interceptors and events
15. HQL and JPQL
16. Criteria
17. Native SQL Queries
18. Spatial
19. Multitenancy
20. OSGi
21. Envers
22. Database Portability Considerations
23. Configurations
24. Mapping annotations
25. Performance Tuning and Best Practices
26. Legacy Bootstrapping
27. Migration
28. Legacy Domain Model
29. Legacy Hibernate Criteria Queries
30. Legacy Hibernate Native Queries
31. References
Preface
Working with both Object-Oriented software and Relational Databases can be cumbersome and time-consuming. Development
costs are significantly higher due to a paradigm mismatch between how data is represented in objects versus relational
databases. Hibernate is an Object/Relational Mapping solution for Java environments. The term Object/Relational Mapping
(http://en.wikipedia.org/wiki/Object-relational_mapping) refers to the technique of mapping data from an object model representation to
a relational data model representation (and vice versa).
Hibernate not only takes care of the mapping from Java classes to database tables (and from Java data types to SQL data types),
but also provides data query and retrieval facilities. It can significantly reduce development time otherwise spent with manual
data handling in SQL and JDBC. Hibernate’s design goal is to relieve the developer from 95% of common data persistence-related
programming tasks by eliminating the need for manual, hand-crafted data processing using SQL and JDBC. However, unlike many
other persistence solutions, Hibernate does not hide the power of SQL from you and guarantees that your investment in
relational technology and knowledge is as valid as always.
Hibernate may not be the best solution for data-centric applications that only use stored procedures to implement the business
logic in the database; it is most useful with object-oriented domain models and business logic in the Java-based middle tier.
However, Hibernate can certainly help you to remove or encapsulate vendor-specific SQL code and will help with the common
task of result set translation from a tabular representation to a graph of objects.
Get Involved
Use Hibernate and report any bugs or issues you find. See Issue Tracker (http://hibernate.org/issuetracker) for details.
Try your hand at fixing some bugs or implementing enhancements. Again, see Issue Tracker (http://hibernate.org/issuetracker).
Engage with the community using mailing lists, forums, IRC, or other ways listed in the Community section
(http://hibernate.org/community).
Help improve or translate this documentation. Contact us on the developer mailing list if you have interest.
Spread the word. Let the rest of your organization know about the benefits of Hibernate.
System Requirements
Hibernate 5.2 and later versions require at least Java 1.8 and JDBC 4.2.
Hibernate 5.1 and older versions require at least Java 1.6 and JDBC 4.0.
When building Hibernate 5.1 or older from sources, you need Java 1.7 due to a bug in the JDK 1.6
compiler.
While having a strong background in SQL is not required to use Hibernate, it certainly helps a lot
because it all boils down to SQL statements. Probably even more important is an understanding of data
modeling principles, so introductory data modeling resources are a good starting point.
Understanding the basics of transactions and design patterns such as Unit of Work PoEAA or Application
Transaction are important as well. These topics will be discussed in the documentation, but a prior
understanding will certainly help.
1. Architecture
1.1. Overview
[Diagram: Data Access Layers — the Java application uses either the Java Persistence API or the Hibernate native API; Hibernate sits on top of JDBC, which talks to the Relational Database.]
Hibernate, as an ORM solution, effectively "sits between" the Java application data access layer and the Relational Database, as
can be seen in the diagram above. The Java application makes use of the Hibernate APIs to load, store, query, etc its domain data.
Here we will introduce the essential Hibernate APIs. This will be a brief introduction; we will discuss these contracts in detail
later.
As a JPA provider, Hibernate implements the Java Persistence API specifications and the association between JPA interfaces and
Hibernate specific implementations can be visualized in the following diagram:
[Diagram: JPA and Hibernate API correspondence — EntityManagerFactory ↔ SessionFactory, EntityManager ↔ Session, EntityTransaction ↔ Transaction (TransactionImpl).]
SessionFactory ( org.hibernate.SessionFactory )
A thread-safe (and immutable) representation of the mapping of the application domain model to a database. Acts as a factory
for org.hibernate.Session instances. The EntityManagerFactory is the JPA equivalent of a SessionFactory and
basically, those two converge into the same SessionFactory implementation.
A SessionFactory is very expensive to create, so, for any given database, the application should have only one associated
SessionFactory . The SessionFactory maintains services that Hibernate uses across all Session(s) such as second level
caches, connection pools, transaction system integrations, etc.
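Because of this cost, applications typically build the SessionFactory once and share it. The pattern can be sketched in plain Java with a lazily initialized holder; the ExpensiveFactory and FactoryHolder names below are illustrative stand-ins, not Hibernate API:

```java
// Illustrative sketch: build one expensive factory per application, lazily, and reuse it.
public class FactoryHolder {

    // Stand-in for a SessionFactory-like object whose construction is costly.
    public static class ExpensiveFactory {
        public static int buildCount = 0;   // track how often the costly build runs
        ExpensiveFactory() { buildCount++; }
    }

    // Initialization-on-demand holder idiom: thread-safe, built exactly once.
    private static class Holder {
        static final ExpensiveFactory INSTANCE = new ExpensiveFactory();
    }

    public static ExpensiveFactory getFactory() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        ExpensiveFactory a = getFactory();
        ExpensiveFactory b = getFactory();
        // Repeated lookups reuse the same instance; the build ran exactly once.
        if ( a != b || ExpensiveFactory.buildCount != 1 ) {
            throw new AssertionError( "factory should be built exactly once" );
        }
        System.out.println( "build count = " + ExpensiveFactory.buildCount );
    }
}
```

The same idea applies whether the factory is held in a static holder, a CDI/Spring singleton bean, or a JNDI-bound resource.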
Session ( org.hibernate.Session )
A single-threaded, short-lived object conceptually modeling a "Unit of Work" PoEAA. In JPA nomenclature, the Session is
represented by an EntityManager .
Behind the scenes, the Hibernate Session wraps a JDBC java.sql.Connection and acts as a factory for
org.hibernate.Transaction instances. It maintains a generally "repeatable read" persistence context (first level cache) of
the application domain model.
Transaction ( org.hibernate.Transaction )
A single-threaded, short-lived object used by the application to demarcate individual physical transaction boundaries.
EntityTransaction is the JPA equivalent and both act as an abstraction API to isolate the application from the underlying
transaction system in use (JDBC or JTA).
2. Domain Model
The term domain model (https://en.wikipedia.org/wiki/Domain_model) comes from the realm of data modeling. It is the model that
ultimately describes the problem domain (https://en.wikipedia.org/wiki/Problem_domain) you are working in. Sometimes you will also
hear the term persistent classes.
Ultimately the application domain model is the central character in an ORM. It is made up of the classes you wish to map.
Hibernate works best if these classes follow the Plain Old Java Object (POJO) / JavaBean programming model. However, none of
these rules are hard requirements. Indeed, Hibernate assumes very little about the nature of your persistent objects. You can
express a domain model in other ways (using trees of java.util.Map instances, for example).
Historically applications using Hibernate would have used its proprietary XML mapping file format for this purpose. With the
coming of JPA, most of this information is now defined in a way that is portable across ORM/JPA providers using annotations
(and/or standardized XML format). This chapter will focus on JPA mapping where possible. For Hibernate mapping features not
supported by JPA we will prefer Hibernate extension annotations.
The Hibernate type is neither a Java type nor a SQL data type. It provides information about both of these, as
well as the marshalling between them.
When you encounter the term type in discussions of Hibernate, it may refer to the Java type, the JDBC type, or
the Hibernate type, depending on context.
To help understand the type categorizations, let’s look at a simple table and domain model that we wish to map.
SQL
create table Contact (
id integer not null,
first varchar(255),
last varchar(255),
middle varchar(255),
notes varchar(255),
starred boolean not null,
website varchar(255),
primary key (id)
)
JAVA
@Entity(name = "Contact")
public static class Contact {
    @Id
    private Integer id;
    private Name name;
    private String notes;
    private boolean starred;
    private String website;
}

@Embeddable
public class Name {
    private String first, middle, last;
}
Value types
Entity types
Looked at another way, all the state of an entity is made up entirely of value types. These state fields or JavaBean properties are
termed persistent attributes. The persistent attributes of the Contact class are value types.
Basic types
in mapping the Contact table, all attributes except for name would be basic types. Basic types are discussed in detail in Basic
Types
Embeddable types
the name attribute is an example of an embeddable type, which is discussed in detail in Embeddable Types
Collection types
although not featured in the aforementioned example, collection types are also a distinct category among value types.
Collection types are further discussed in Collections
The first stage is determining a proper logical name from the domain model mapping. A logical name can be either explicitly
specified by the user (e.g., using @Column or @Table) or it can be implicitly determined by Hibernate through an
ImplicitNamingStrategy contract.
Second is the resolving of this logical name to a physical name which is defined by the PhysicalNamingStrategy contract.
Also, the NamingStrategy contract was often not flexible enough to properly apply a given naming "rule", either
because the API lacked the information to decide or because the API was honestly not well defined as it grew.
Due to these limitations, org.hibernate.cfg.NamingStrategy has been deprecated and then removed in favor of
ImplicitNamingStrategy and PhysicalNamingStrategy.
At the core, the idea behind each naming strategy is to minimize the amount of repetitive information a developer must provide
for mapping a domain model.
JPA Compatibility
JPA defines inherent rules about implicit logical name determination. If JPA provider portability is a major
concern, or if you really just like the JPA-defined implicit naming rules, be sure to stick with
ImplicitNamingStrategyJpaCompliantImpl (the default)
Also, JPA defines no separation between logical and physical name. Following the JPA specification, the logical
name is the physical name. If JPA provider portability is important, applications should prefer not to specify a
PhysicalNamingStrategy.
2.2.1. ImplicitNamingStrategy
When an entity does not explicitly name the database table that it maps to, we need to implicitly determine that table name. Or
when a particular attribute does not explicitly name the database column that it maps to, we need to implicitly determine that
column name. That is the role of the org.hibernate.boot.model.naming.ImplicitNamingStrategy contract: to
determine a logical name when the mapping does not provide an explicit name.
[Diagram: the ImplicitNamingStrategy implementation hierarchy, with ImplicitNamingStrategyJpaCompliantImpl among the out-of-the-box implementations.]
Hibernate defines multiple ImplicitNamingStrategy implementations out-of-the-box. Applications are also free to plug-in custom
implementations.
There are multiple ways to specify the ImplicitNamingStrategy to use. First, applications can specify the implementation using the
hibernate.implicit_naming_strategy configuration setting which accepts:
default
jpa
legacy-hbm
legacy-jpa
component-path
Secondly, applications and integrations can leverage org.hibernate.boot.MetadataBuilder#applyImplicitNamingStrategy
to specify the ImplicitNamingStrategy to use. See Bootstrap for additional details on bootstrapping.
2.2.2. PhysicalNamingStrategy
Many organizations define rules around the naming of database objects (tables, columns, foreign keys, etc). The idea of a
PhysicalNamingStrategy is to help implement such naming rules without having to hard-code them into the mapping via explicit
names.
While the purpose of an ImplicitNamingStrategy is to determine that an attribute named accountNumber maps to a logical
column name of accountNumber when not explicitly specified, the purpose of a PhysicalNamingStrategy would be, for example,
to say that the physical column name should instead be abbreviated acct_num .
It is true that the resolution to acct_num could have been handled in an ImplicitNamingStrategy in this
case. But the point is separation of concerns. The PhysicalNamingStrategy will be applied regardless of
whether the attribute explicitly specified the column name or whether we determined that implicitly.
The ImplicitNamingStrategy would only be applied if an explicit name was not given. So it depends on needs and
intent.
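The core transformation such a strategy performs can be sketched in plain Java, independent of the Hibernate contracts; the NamingSketch class, its method name, and the regex-based splitting below are illustrative assumptions, not Hibernate API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of camel-case-to-snake-case naming plus word abbreviation.
public class NamingSketch {

    // case-insensitive abbreviation table, e.g. "number" -> "num"
    private static final Map<String, String> ABBREVIATIONS =
            new TreeMap<>( String.CASE_INSENSITIVE_ORDER );
    static {
        ABBREVIATIONS.put( "account", "acct" );
        ABBREVIATIONS.put( "number", "num" );
    }

    public static String toPhysicalName(String logicalName) {
        // split on lower-to-upper case boundaries: "accountNumber" -> ["account", "Number"]
        List<String> parts = new ArrayList<>();
        for ( String part : logicalName.split( "(?<=[a-z])(?=[A-Z])" ) ) {
            String replaced = ABBREVIATIONS.getOrDefault( part, part );
            parts.add( replaced.toLowerCase( Locale.ROOT ) );
        }
        return String.join( "_", parts );
    }

    public static void main(String[] args) {
        System.out.println( toPhysicalName( "accountNumber" ) ); // acct_num
    }
}
```

A real PhysicalNamingStrategy applies exactly this kind of logic inside the toPhysical*Name callbacks, as the Acme Corp example below shows.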
The default implementation is to simply use the logical name as the physical name. However, applications and integrations can
define custom implementations of this PhysicalNamingStrategy contract. Here is an example PhysicalNamingStrategy for a
fictitious company named Acme Corp whose naming standards prefer underscore-delimited words rather than camel casing and
call for the replacement of certain words with standard abbreviations.
JAVA
/*
* Hibernate, Relational Persistence for Idiomatic Java
*
* License: GNU Lesser General Public License (LGPL), version 2.1 or later.
* See the lgpl.txt file in the root directory or <http://www.gnu.org/licenses/lgpl-2.1.html>.
*/
package org.hibernate.userguide.naming;
import java.util.LinkedList;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.TreeMap;
import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategy;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;
import org.apache.commons.lang3.StringUtils;
/**
* An example PhysicalNamingStrategy that implements database object naming standards
* for our fictitious company Acme Corp.
* <p/>
* In general Acme Corp prefers underscore-delimited words rather than camel casing.
* <p/>
* Additionally standards call for the replacement of certain words with abbreviations.
*
* @author Steve Ebersole
*/
public class AcmeCorpPhysicalNamingStrategy implements PhysicalNamingStrategy {

    private static final Map<String, String> ABBREVIATIONS = buildAbbreviationMap();

    @Override
    public Identifier toPhysicalCatalogName(Identifier name, JdbcEnvironment jdbcEnvironment) {
        // Acme naming standards do not apply to catalog names
        return name;
    }

    @Override
    public Identifier toPhysicalSchemaName(Identifier name, JdbcEnvironment jdbcEnvironment) {
        // Acme naming standards do not apply to schema names
        return name;
    }

    @Override
    public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment jdbcEnvironment) {
        final List<String> parts = splitAndReplace( name.getText() );
        return jdbcEnvironment.getIdentifierHelper().toIdentifier(
                join( parts ),
                name.isQuoted()
        );
    }

    @Override
    public Identifier toPhysicalSequenceName(Identifier name, JdbcEnvironment jdbcEnvironment) {
        final LinkedList<String> parts = splitAndReplace( name.getText() );
        // Acme Corp says all sequences should end with _seq
        if ( !"seq".equalsIgnoreCase( parts.getLast() ) ) {
            parts.add( "seq" );
        }
        return jdbcEnvironment.getIdentifierHelper().toIdentifier(
                join( parts ),
                name.isQuoted()
        );
    }

    @Override
    public Identifier toPhysicalColumnName(Identifier name, JdbcEnvironment jdbcEnvironment) {
        final List<String> parts = splitAndReplace( name.getText() );
        return jdbcEnvironment.getIdentifierHelper().toIdentifier(
                join( parts ),
                name.isQuoted()
        );
    }

    private static Map<String, String> buildAbbreviationMap() {
        TreeMap<String, String> abbreviationMap = new TreeMap<>( String.CASE_INSENSITIVE_ORDER );
        abbreviationMap.put( "account", "acct" );
        abbreviationMap.put( "number", "num" );
        return abbreviationMap;
    }

    private LinkedList<String> splitAndReplace(String name) {
        LinkedList<String> result = new LinkedList<>();
        for ( String part : StringUtils.splitByCharacterTypeCamelCase( name ) ) {
            if ( part == null || part.trim().isEmpty() ) {
                // skip any empty tokens
                continue;
            }
            part = applyAbbreviationReplacement( part );
            result.add( part.toLowerCase( Locale.ROOT ) );
        }
        return result;
    }

    private String applyAbbreviationReplacement(String word) {
        if ( ABBREVIATIONS.containsKey( word ) ) {
            return ABBREVIATIONS.get( word );
        }
        return word;
    }

    private String join(List<String> parts) {
        return String.join( "_", parts );
    }
}
There are multiple ways to specify the PhysicalNamingStrategy to use. First, applications can specify the implementation using
the hibernate.physical_naming_strategy configuration setting which accepts:
Internally Hibernate uses a registry of basic types when it needs to resolve a specific org.hibernate.type.Type .
To use these hibernate-spatial types, you must add the hibernate-spatial dependency to your
classpath and use a org.hibernate.spatial.SpatialDialect implementation. See Spatial for more
details about spatial types.
These mappings are managed by a service inside Hibernate called the org.hibernate.type.BasicTypeRegistry , which
essentially maintains a map of org.hibernate.type.BasicType (a org.hibernate.type.Type specialization) instances keyed
by a name. That is the purpose of the "BasicTypeRegistry key(s)" column in the previous tables.
JAVA
@Entity(name = "Product")
public class Product {
@Id
@Basic
private Integer id;
@Basic
private String sku;
@Basic
private String name;
@Basic
private String description;
}
JAVA
@Entity(name = "Product")
public class Product {
    @Id
    private Integer id;
    private String sku;
    private String name;
    private String description;
}
The JPA specification strictly limits the Java types that can be marked as basic to the following listing:
java.lang.String
java.math.BigInteger
java.math.BigDecimal
java.util.Date
java.util.Calendar
java.sql.Date
java.sql.Time
java.sql.Timestamp
byte[] or Byte[]
char[] or Character[]
enums
any other type that implements Serializable (JPA’s "support" for Serializable types is to directly serialize
their state to the database).
If provider portability is a concern, you should stick to just these basic types. Note that JPA 2.1 did add the
notion of a javax.persistence.AttributeConverter to help alleviate some of these concerns; see JPA 2.1
AttributeConverters for more on this topic.
Defines whether this attribute allows nulls. JPA defines this as "a hint", which essentially means that its effect is not strictly
required. As long as the type is not primitive, Hibernate takes this to mean that the underlying column should be NULLABLE.
Defines whether this attribute should be fetched eagerly or lazily. JPA says that EAGER is a requirement to the provider
(Hibernate) that the value should be fetched when the owner is fetched, while LAZY is merely a hint that the value is fetched
when the attribute is accessed. Hibernate ignores this setting for basic types unless you are using bytecode enhancement. See
the BytecodeEnhancement for additional information on fetching and on bytecode enhancement.
For basic type attributes, the implicit naming rule is that the column name is the same as the attribute name. If that implicit
naming rule does not meet your requirements, you can explicitly tell Hibernate (and other providers) the column name to use.
JAVA
@Entity(name = "Product")
public class Product {
    @Id
    private Integer id;

    @Column(name = "NOTES")
    private String description;
}
Here we use @Column to explicitly map the description attribute to the NOTES column, as opposed to the implicit column
name description .
The @Column annotation defines other mapping information as well. See its Javadocs for details.
2.3.4. BasicTypeRegistry
We said before that a Hibernate type is not a Java type, nor a SQL type, but that it understands both and performs the marshalling
between them. But looking at the basic type mappings from the previous examples, how did Hibernate know to use its
org.hibernate.type.StringType for mapping for java.lang.String attributes, or its org.hibernate.type.IntegerType
for mapping java.lang.Integer attributes?
The answer lies in a service inside Hibernate called the org.hibernate.type.BasicTypeRegistry , which essentially maintains
a map of org.hibernate.type.BasicType (a org.hibernate.type.Type specialization) instances keyed by a name.
We will see later, in the Explicit BasicTypes section, that we can explicitly tell Hibernate which BasicType to use for a particular
attribute. But first, let’s explore how implicit resolution works and how applications can adjust the implicit resolution.
A thorough discussion of the BasicTypeRegistry and all the different ways to contribute types to it is
beyond the scope of this documentation. Please see Integrations Guide for complete details.
As an example, take a String attribute such as we saw before with Product#sku. Since there was no explicit type mapping,
Hibernate looks to the BasicTypeRegistry to find the registered mapping for java.lang.String . This goes back to the
"BasicTypeRegistry key(s)" column we saw in the tables at the start of this chapter.
As a baseline within BasicTypeRegistry , Hibernate follows the recommended mappings of JDBC for Java types. JDBC
recommends mapping Strings to VARCHAR, which is the exact mapping that StringType handles. So that is the baseline
mapping within BasicTypeRegistry for Strings.
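The registry is, at heart, a name-keyed lookup. A minimal plain-Java sketch of that idea (TypeRegistrySketch and its methods are hypothetical names, not the actual BasicTypeRegistry API) looks like this:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of a name-keyed type registry.
public class TypeRegistrySketch {

    private final Map<String, String> registry = new HashMap<>();

    // register a type description under a key (e.g. a Java class name)
    public void register(String key, String typeName) {
        registry.put( key, typeName );
    }

    // resolve the registered type for a key, or null if none is registered
    public String resolve(String key) {
        return registry.get( key );
    }

    public static void main(String[] args) {
        TypeRegistrySketch r = new TypeRegistrySketch();
        // baseline JDBC-recommended mapping: String handled as VARCHAR
        r.register( "java.lang.String", "StringType (VARCHAR)" );
        System.out.println( r.resolve( "java.lang.String" ) );
    }
}
```

The real BasicTypeRegistry additionally registers each BasicType under several keys (class name, short name, etc.) and is populated and overridden during bootstrap.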
Applications can also extend (add new BasicType registrations) or override (replace an existing BasicType registration) using
one of the MetadataBuilder#applyBasicType methods or the MetadataBuilder#applyTypes method during bootstrap. For
more details, see Custom BasicTypes section.
Sometimes you want a particular attribute to be handled differently. Occasionally Hibernate will implicitly pick a BasicType
that you do not want (and for some reason you do not want to adjust the BasicTypeRegistry ).
In these cases, you must explicitly tell Hibernate the BasicType to use, via the org.hibernate.annotations.Type annotation.
JAVA
@Entity(name = "Product")
public class Product {
    @Id
    private Integer id;

    @Type(type = "nstring")
    private String name;

    @Type(type = "materialized_nclob")
    private String description;
}
This tells Hibernate to store the Strings as nationalized data. This is just for illustration purposes; for better ways to indicate
nationalized character data see Mapping Nationalized Character Data section.
Additionally, the description is to be handled as a LOB. Again, for better ways to indicate LOBs see Mapping LOBs section.
As a means of illustrating the different approaches, let’s consider a use case where we need to support a java.util.BitSet
mapping that’s stored as a VARCHAR.
Implementing a BasicType
The first approach is to directly implement the BasicType interface.
Because the BasicType interface has a lot of methods to implement, it’s much more convenient to extend the
AbstractStandardBasicType , or the AbstractSingleColumnStandardBasicType if the value is stored in a single
database column.
JAVA
public class BitSetType
        extends AbstractSingleColumnStandardBasicType<BitSet>
        implements DiscriminatorType<BitSet> {

    public static final BitSetType INSTANCE = new BitSetType();

    public BitSetType() {
        super( VarcharTypeDescriptor.INSTANCE, BitSetTypeDescriptor.INSTANCE );
    }

    @Override
    public BitSet stringToObject(String xml) throws Exception {
        return fromString( xml );
    }

    @Override
    public String objectToSQLString(BitSet value, Dialect dialect) throws Exception {
        return toString( value );
    }

    @Override
    public String getName() {
        return "bitset";
    }
}
JAVA
public class BitSetTypeDescriptor extends AbstractTypeDescriptor<BitSet> {

    private static final String DELIMITER = ",";

    public static final BitSetTypeDescriptor INSTANCE = new BitSetTypeDescriptor();

    public BitSetTypeDescriptor() {
        super( BitSet.class );
    }

    @Override
    public String toString(BitSet value) {
        StringBuilder builder = new StringBuilder();
        for ( long token : value.toLongArray() ) {
            if ( builder.length() > 0 ) {
                builder.append( DELIMITER );
            }
            builder.append( Long.toString( token, 2 ) );
        }
        return builder.toString();
    }

    @Override
    public BitSet fromString(String string) {
        if ( string == null || string.isEmpty() ) {
            return null;
        }
        String[] tokens = string.split( DELIMITER );
        long[] values = new long[tokens.length];
        for ( int i = 0; i < tokens.length; i++ ) {
            values[i] = Long.valueOf( tokens[i], 2 );
        }
        return BitSet.valueOf( values );
    }

    @SuppressWarnings({"unchecked"})
    public <X> X unwrap(BitSet value, Class<X> type, WrapperOptions options) {
        if ( value == null ) {
            return null;
        }
        if ( BitSet.class.isAssignableFrom( type ) ) {
            return (X) value;
        }
        if ( String.class.isAssignableFrom( type ) ) {
            return (X) toString( value );
        }
        throw unknownUnwrap( type );
    }

    public <X> BitSet wrap(X value, WrapperOptions options) {
        if ( value == null ) {
            return null;
        }
        if ( String.class.isInstance( value ) ) {
            return fromString( (String) value );
        }
        if ( BitSet.class.isInstance( value ) ) {
            return (BitSet) value;
        }
        throw unknownWrap( value.getClass() );
    }
}
The unwrap method is used when passing a BitSet as a PreparedStatement bind parameter, while the wrap method is used
to transform the JDBC column value object (e.g. String in our case) to the actual mapping object type (e.g. BitSet in this
example).
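The String encoding itself (each long token rendered in base 2, comma-delimited) can be exercised on its own in plain Java; the BitSetCodec class below is an illustrative standalone re-statement of the descriptor's marshalling logic, not part of Hibernate:

```java
import java.util.BitSet;
import java.util.StringJoiner;

// Standalone demonstration of the BitSet <-> String marshalling logic.
public class BitSetCodec {

    private static final String DELIMITER = ",";

    public static String toString(BitSet bitSet) {
        StringJoiner joiner = new StringJoiner( DELIMITER );
        for ( long token : bitSet.toLongArray() ) {
            joiner.add( Long.toString( token, 2 ) ); // render each long in base 2
        }
        return joiner.toString();
    }

    public static BitSet fromString(String string) {
        String[] tokens = string.split( DELIMITER );
        long[] values = new long[tokens.length];
        for ( int i = 0; i < tokens.length; i++ ) {
            values[i] = Long.valueOf( tokens[i], 2 ); // parse base-2 tokens back
        }
        return BitSet.valueOf( values );
    }

    public static void main(String[] args) {
        BitSet original = BitSet.valueOf( new long[] {1, 2, 3} );
        BitSet roundTripped = fromString( toString( original ) );
        System.out.println( original.equals( roundTripped ) ); // true
    }
}
```

The round trip confirms that whatever unwrap writes to the VARCHAR column, wrap can read back into an equal BitSet.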
The BasicType must be registered, and this can be done at bootstrapping time:
JAVA
configuration.registerTypeContributor( (typeContributions, serviceRegistry) -> {
    typeContributions.contributeType( BitSetType.INSTANCE );
} );
JAVA
ServiceRegistry standardRegistry =
        new StandardServiceRegistryBuilder().build();

MetadataSources sources = new MetadataSources( standardRegistry );

MetadataBuilder metadataBuilder = sources.getMetadataBuilder();

metadataBuilder.applyBasicType( BitSetType.INSTANCE );
With the new BitSetType being registered as bitset , the entity mapping looks like this:
JAVA
@Entity(name = "Product")
public static class Product {
    @Id
    private Integer id;

    @Type(type = "bitset")
    private BitSet bitSet;
}
Alternatively, you can use the @TypeDef and skip the registration phase:
JAVA
@Entity(name = "Product")
@TypeDef(
    name = "bitset",
    defaultForType = BitSet.class,
    typeClass = BitSetType.class
)
public static class Product {
    @Id
    private Integer id;

    private BitSet bitSet;
}
JAVA
BitSet bitSet = BitSet.valueOf( new long[] {1, 2, 3} );

Product product = new Product();
product.setId( 1 );
product.setBitSet( bitSet );
entityManager.persist( product );
When executing this unit test, Hibernate generates the following SQL statements:
SQL
DEBUG SQL:92 -
insert
into
Product
(bitSet, id)
values
(?, ?)
TRACE BasicBinder:65 - binding parameter [1] as [VARCHAR] - [{0, 65, 128, 129}]
TRACE BasicBinder:65 - binding parameter [2] as [INTEGER] - [1]
DEBUG SQL:92 -
select
bitsettype0_.id as id1_0_0_,
bitsettype0_.bitSet as bitSet2_0_0_
from
Product bitsettype0_
where
bitsettype0_.id=?
As you can see, the BitSetType takes care of the Java-to-SQL and SQL-to-Java type conversion.
Implementing a UserType
The second approach is to implement the UserType interface.
JAVA
public class BitSetUserType implements UserType {

    public static final BitSetUserType INSTANCE = new BitSetUserType();

    private static final Logger log = Logger.getLogger( BitSetUserType.class );

    @Override
    public int[] sqlTypes() {
        return new int[] {StringType.INSTANCE.sqlType()};
    }
    @Override
    public Class returnedClass() {
        return BitSet.class;
    }

    @Override
    public boolean equals(Object x, Object y)
            throws HibernateException {
        return Objects.equals( x, y );
    }

    @Override
    public int hashCode(Object x)
            throws HibernateException {
        return Objects.hashCode( x );
    }

    @Override
    public Object nullSafeGet(
            ResultSet rs, String[] names, SharedSessionContractImplementor session, Object owner)
            throws HibernateException, SQLException {
        String columnName = names[0];
        String columnValue = (String) rs.getObject( columnName );
        log.debugv( "Result set column {0} value is {1}", columnName, columnValue );
        return columnValue == null ? null :
                BitSetTypeDescriptor.INSTANCE.fromString( columnValue );
    }

    @Override
    public void nullSafeSet(
            PreparedStatement st, Object value, int index, SharedSessionContractImplementor session)
            throws HibernateException, SQLException {
        if ( value == null ) {
            log.debugv( "Binding null to parameter {0}", index );
            st.setNull( index, Types.VARCHAR );
        }
        else {
            String stringValue = BitSetTypeDescriptor.INSTANCE.toString( (BitSet) value );
            log.debugv( "Binding {0} to parameter {1}", stringValue, index );
            st.setString( index, stringValue );
        }
    }

    @Override
    public Object deepCopy(Object value)
            throws HibernateException {
        return value == null ? null :
                BitSet.valueOf( BitSet.class.cast( value ).toLongArray() );
    }

    @Override
    public boolean isMutable() {
        return true;
    }

    @Override
    public Serializable disassemble(Object value)
            throws HibernateException {
        return (BitSet) deepCopy( value );
    }

    @Override
    public Object assemble(Serializable cached, Object owner)
            throws HibernateException {
        return deepCopy( cached );
    }

    @Override
    public Object replace(Object original, Object target, Object owner)
            throws HibernateException {
        return deepCopy( original );
    }
}
JAVA
@Entity(name = "Product")
public static class Product {
    @Id
    private Integer id;

    @Type(type = "bitset")
    private BitSet bitSet;
}
In this example, the UserType is registered under the bitset name, and this is done like this:
JAVA
configuration.registerTypeContributor( (typeContributions, serviceRegistry) -> {
    typeContributions.contributeType( BitSetUserType.INSTANCE, "bitset" );
} );
JAVA
ServiceRegistry standardRegistry =
        new StandardServiceRegistryBuilder().build();

MetadataSources sources = new MetadataSources( standardRegistry );

MetadataBuilder metadataBuilder = sources.getMetadataBuilder();

metadataBuilder.applyBasicType( new BitSetUserType(), "bitset" );
Like BasicType, you can also register the UserType using a simple name. Without such a registration, the UserType can still be referenced by its fully-qualified class name:
JAVA
@Type( type = "org.hibernate.userguide.mapping.basic.BitSetUserType" )
When running the previous test case against the BitSetUserType entity mapping, Hibernate executed the following SQL
statements:
SQL
DEBUG SQL:92 -
insert
into
Product
(bitSet, id)
values
(?, ?)
DEBUG SQL:92 -
select
bitsetuser0_.id as id1_0_0_,
bitsetuser0_.bitSet as bitSet2_0_0_
from
Product bitsetuser0_
where
bitsetuser0_.id=?
@Enumerated
The original JPA-compliant way to map enums was via the @Enumerated annotation (or @MapKeyEnumerated for map keys),
working on the principle that the enum values are stored according to one of two strategies indicated by
javax.persistence.EnumType :

ORDINAL
stored according to the enum value’s ordinal position within the enum class, as indicated by java.lang.Enum#ordinal

STRING
stored according to the enum value’s name, as indicated by java.lang.Enum#name
JAVA
public enum PhoneType {
LAND_LINE,
MOBILE;
}
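The two strategies map directly onto methods of java.lang.Enum, as this standalone snippet illustrates (the EnumStorageDemo wrapper class is only here to make the snippet self-contained):

```java
public class EnumStorageDemo {

    public enum PhoneType {
        LAND_LINE,
        MOBILE;
    }

    public static void main(String[] args) {
        // ORDINAL strategy stores the declaration position of the constant
        System.out.println( PhoneType.MOBILE.ordinal() ); // 1
        // STRING strategy stores the constant's name
        System.out.println( PhoneType.MOBILE.name() );    // MOBILE
    }
}
```

Note the consequence for schema evolution: reordering or inserting enum constants changes ordinals and silently corrupts ORDINAL-mapped data, while renaming constants breaks STRING-mapped data.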
In the ORDINAL example, the phone_type column is defined as a (nullable) INTEGER type and would hold:

NULL
0 (LAND_LINE)
1 (MOBILE)
JAVA
@Entity(name = "Phone")
public static class Phone {

    @Id
    private Long id;

    @Column(name = "phone_number")
    private String number;

    @Enumerated(EnumType.ORDINAL)
    @Column(name = "phone_type")
    private PhoneType type;
}
When persisting this entity, Hibernate generates the following SQL statement:
JAVA
Phone phone = new Phone();
phone.setId( 1L );
phone.setNumber( "123-456-78990" );
phone.setType( PhoneType.MOBILE );
entityManager.persist( phone );
SQL
INSERT INTO Phone (phone_number, phone_type, id)
VALUES ('123-456-78990', 1, 1)
In the STRING example, the phone_type column is defined as a (nullable) VARCHAR type and would hold:
NULL
LAND_LINE
MOBILE
JAVA
@Entity(name = "Phone")
public static class Phone {

    @Id
    private Long id;

    @Column(name = "phone_number")
    private String number;

    @Enumerated(EnumType.STRING)
    @Column(name = "phone_type")
    private PhoneType type;
}
Persisting the same entity as in the @Enumerated(ORDINAL) example, Hibernate generates the following SQL statement:
SQL
INSERT INTO Phone (phone_number, phone_type, id)
VALUES ('123-456-78990', 'MOBILE', 1)
AttributeConverter
Let’s consider the following Gender enum which stores its values using the 'M' and 'F' codes.
JAVA
public enum Gender {

    MALE( 'M' ),
    FEMALE( 'F' );

    private final char code;

    Gender(char code) {
        this.code = code;
    }

    public static Gender fromCode(char code) {
        if ( code == 'M' || code == 'm' ) {
            return MALE;
        }
        if ( code == 'F' || code == 'f' ) {
            return FEMALE;
        }
        throw new UnsupportedOperationException( "The code " + code + " is not supported!" );
    }

    public char getCode() {
        return code;
    }
}
You can map enums in a JPA compliant way using a JPA 2.1 AttributeConverter.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @Convert(converter = GenderConverter.class)
    public Gender gender;
}

@Converter
public static class GenderConverter
        implements AttributeConverter<Gender, Character> {

    @Override
    public Character convertToDatabaseColumn(Gender value) {
        if ( value == null ) {
            return null;
        }
        return value.getCode();
    }

    @Override
    public Gender convertToEntityAttribute(Character value) {
        if ( value == null ) {
            return null;
        }
        return Gender.fromCode( value );
    }
}
Here, the gender column is defined as a CHAR type and would hold:
NULL
'M'
'F'
For additional details on using AttributeConverters, see JPA 2.1 AttributeConverters section.
JPA explicitly disallows the use of an AttributeConverter with an attribute marked as @Enumerated . So
if using the AttributeConverter approach, be sure not to mark the attribute as @Enumerated .
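The conversion logic itself is plain Java and can be verified without any persistence context. The snippet below is an illustrative, self-contained re-statement of the code-based round trip (GenderCodeDemo is a hypothetical wrapper, not part of the guide's example code):

```java
public class GenderCodeDemo {

    public enum Gender {
        MALE( 'M' ),
        FEMALE( 'F' );

        private final char code;

        Gender(char code) {
            this.code = code;
        }

        public char getCode() {
            return code;
        }

        // reverse lookup used when reading the CHAR column back
        public static Gender fromCode(char code) {
            for ( Gender gender : values() ) {
                if ( gender.code == code ) {
                    return gender;
                }
            }
            throw new IllegalArgumentException( "Unknown code: " + code );
        }
    }

    public static void main(String[] args) {
        // round trip: entity attribute -> database column -> entity attribute
        char stored = Gender.FEMALE.getCode();
        System.out.println( stored );                    // F
        System.out.println( Gender.fromCode( stored ) ); // FEMALE
    }
}
```

Keeping the code-to-constant mapping inside the enum, as here, means the AttributeConverter stays a thin delegation layer.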
JAVA
@Entity(name = "Photo")
public static class Photo {

    @Id
    private Integer id;

    @Convert(converter = CaptionConverter.class)
    private Caption caption;
}
JAVA
public static class Caption {

    private String text;

    public Caption(String text) {
        this.text = text;
    }

    public String getText() {
        return text;
    }

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        Caption caption = (Caption) o;
        return text != null ? text.equals( caption.text ) : caption.text == null;
    }

    @Override
    public int hashCode() {
        return text != null ? text.hashCode() : 0;
    }
}
JAVA
public static class CaptionConverter
        implements AttributeConverter<Caption, String> {

    @Override
    public String convertToDatabaseColumn(Caption attribute) {
        return attribute.getText();
    }

    @Override
    public Caption convertToEntityAttribute(String dbData) {
        return new Caption( dbData );
    }
}
Traditionally, you could only reference the caption entity property through its dbData representation, which in our case is a
String.
Example 28. Filtering by the Caption property using the DB data representation
JAVA
Photo photo = entityManager.createQuery(
        "select p " +
        "from Photo p " +
        "where upper(caption) = upper(:caption)", Photo.class )
.setParameter( "caption", "Nicolae Grigorescu" )
.getSingleResult();
In order to use the Java object Caption representation, you have to get the associated Hibernate Type .
Example 29. Filtering by the Caption property using the Java Object representation
JAVA
SessionFactory sessionFactory = entityManager.getEntityManagerFactory()
        .unwrap( SessionFactory.class );
By passing the associated Hibernate Type , you can use the Caption object when binding the query parameter value.
JAVA
public class Money {

    private long cents;

    public Money(long cents) {
        this.cents = cents;
    }

    public long getCents() {
        return cents;
    }
}
Now, we want to use the Money type when mapping the Account entity:
JAVA
@Entity(name = "Account")
public class Account {

    @Id
    private Long id;

    private String owner;

    @Convert(converter = MoneyConverter.class)
    private Money balance;
}
Since Hibernate has no knowledge of how to persist the Money type, we could use a JPA AttributeConverter to transform the
Money type to a Long . For this purpose, we are going to use the following MoneyConverter utility:
JAVA
public class MoneyConverter
        implements AttributeConverter<Money, Long> {

    @Override
    public Long convertToDatabaseColumn(Money attribute) {
public Long convertToDatabaseColumn(Money attribute) {
return attribute == null ? null : attribute.getCents();
}
@Override
public Money convertToEntityAttribute(Long dbData) {
return dbData == null ? null : new Money ( dbData );
}
}
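Note the null checks in both directions: a converter must tolerate NULL columns as well as null attributes. The standalone sketch below re-states the round trip without any JPA dependency (MoneyConversionDemo and its nested Money stand-in are hypothetical names for illustration):

```java
public class MoneyConversionDemo {

    // minimal stand-in for the Money type from the example above
    public static class Money {
        private final long cents;
        public Money(long cents) { this.cents = cents; }
        public long getCents() { return cents; }
    }

    // same shape as AttributeConverter<Money, Long>, without the JPA interface
    public static Long toColumn(Money attribute) {
        return attribute == null ? null : attribute.getCents();
    }

    public static Money toAttribute(Long dbData) {
        return dbData == null ? null : new Money( dbData );
    }

    public static void main(String[] args) {
        System.out.println( toColumn( new Money( 1500 ) ) );   // 1500
        System.out.println( toColumn( null ) );                // null
        System.out.println( toAttribute( 1500L ).getCents() ); // 1500
    }
}
```

The null-propagating ternaries are what keep a nullable BIGINT column and an optional Money attribute in sync.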
To map the MoneyConverter using HBM configuration files you need to use the converted:: prefix in the type attribute of the
property element.
XML
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">
<hibernate-mapping package="org.hibernate.userguide.mapping.converter.hbm">
<class name="Account" table="account" >
<id name="id"/>
<property name="owner"/>
<property name="balance"
type="converted::org.hibernate.userguide.mapping.converter.hbm.MoneyConverter"/>
</class>
</hibernate-mapping>
Custom type
You can also map enums using a Hibernate custom type mapping. Let’s again revisit the Gender enum example, this time using a
custom Type to store the more standardized 'M' and 'F' codes.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
public GenderType () {
super (
CharTypeDescriptor .INSTANCE,
GenderJavaTypeDescriptor .INSTANCE
);
}
@Override
protected boolean registerUnderJavaType() {
return true;
}
}
protected GenderJavaTypeDescriptor () {
super ( Gender .class );
}
Again, the gender column is defined as a CHAR type and would hold:
NULL
'M'
'F'
For additional details on using custom types, check out the Custom BasicTypes section.
JDBC LOB locators exist to allow efficient access to the LOB data. They allow the JDBC driver to stream parts of the LOB data as
needed, potentially freeing up memory space. However, they can be unnatural to deal with and have certain limitations. For
example, a LOB locator is only portably valid during the duration of the transaction in which it was obtained.
The idea of materialized LOBs is to trade off the potential efficiency gains (not all drivers handle LOB data efficiently) for a more
natural programming paradigm using familiar Java types such as String or byte[] for these LOBs.
Materialized handling deals with the entire LOB contents in memory, whereas LOB locators (in theory) allow streaming parts of the LOB
contents into memory as needed.
java.sql.Blob
java.sql.Clob
java.sql.NClob
Mapping materialized forms of these LOB values would use more familiar Java types such as String , char[] , or byte[] . The
trade-off for this familiarity is usually performance.
Mapping CLOB
For a first look, let’s assume we have a CLOB column that we would like to map ( NCLOB character LOB data will be covered in
Mapping Nationalized Character Data section).
SQL
Let’s first map this using the @Lob JPA annotation and the java.sql.Clob type:
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
private Clob warranty;
To persist such an entity, you have to create a Clob using the ClobProxy Hibernate utility:
JAVA
String warranty = "My product warranty";
entityManager.persist( product );
To retrieve the Clob content, you need to transform the underlying java.io.Reader :
JAVA
Product product = entityManager.find( Product .class , productId );
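The transformation code itself is elided above; the following is a sketch of draining a java.io.Reader (such as the one returned by Clob#getCharacterStream()) into a String. The helper class and method names are ours, not part of the Hibernate API:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReaderToString {

    // Reads the Reader to exhaustion and returns its contents as a String.
    static String readFully(Reader reader) throws IOException {
        StringBuilder sb = new StringBuilder();
        char[] buffer = new char[2048];
        int read;
        while ((read = reader.read(buffer)) != -1) {
            sb.append(buffer, 0, read);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for product.getWarranty().getCharacterStream():
        try (Reader reader = new StringReader("My product warranty")) {
            System.out.println(readFully(reader)); // prints "My product warranty"
        }
    }
}
```

Remember that a LOB locator is only portably valid within the transaction that obtained it, so the stream should be drained while the transaction is still open.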
We could also map the CLOB in a materialized form. This way, we can either use a String or a char[] .
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
private String warranty;
How JDBC deals with LOB data varies from driver to driver, and Hibernate tries to handle all these
variances on your behalf.
However, some drivers are trickier (e.g. PostgreSQL), and, in such cases, you may have to do some
extra work to get LOBs working. Such discussions are beyond the scope of this guide.
We might even want the materialized data as a char array (although this might not be a very good idea).
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
private char[] warranty;
Mapping BLOB
BLOB data is mapped in a similar fashion.
SQL
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
private Blob image;
To persist such an entity, you have to create a Blob using the BlobProxy Hibernate utility:
JAVA
byte[] image = new byte[] {1, 2, 3};
entityManager.persist( product );
To retrieve the Blob content, you need to transform the underlying java.io.InputStream :
JAVA
Product product = entityManager.find( Product .class , productId );
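As with the Clob case, the transformation code is elided; a sketch of draining a java.io.InputStream (such as the one returned by Blob#getBinaryStream()) into a byte array, with illustrative names of our own:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamToBytes {

    // Reads the InputStream to exhaustion and returns its contents as a byte array.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[2048];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for product.getImage().getBinaryStream():
        try (InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3})) {
            System.out.println(readFully(in).length); // prints "3"
        }
    }
}
```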
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
private byte[] image;
NCHAR
NVARCHAR
LONGNVARCHAR
NCLOB
SQL
CREATE TABLE Product (
id INTEGER NOT NULL ,
name VARCHAR(255) ,
warranty NVARCHAR(255) ,
PRIMARY KEY ( id )
)
To map a specific attribute to a nationalized variant data type, Hibernate defines the @Nationalized annotation.
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Nationalized
private String warranty;
Just like with CLOB , Hibernate can also deal with NCLOB SQL data types:
SQL
CREATE TABLE Product (
id INTEGER NOT NULL ,
name VARCHAR(255) ,
warranty nclob ,
PRIMARY KEY ( id )
)
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
@Nationalized
// Clob also works, because NClob extends Clob.
// The database type is still NCLOB either way and handled as such.
private NClob warranty;
To persist such an entity, you have to create an NClob using the NClobProxy Hibernate utility:
JAVA
entityManager.persist( product );
To retrieve the NClob content, you need to transform the underlying java.io.Reader :
JAVA
Product product = entityManager.find( Product .class , productId );
We could also map the NCLOB in a materialized form. This way, we can either use a String or a char[] .
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
@Nationalized
private String warranty;
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Lob
@Nationalized
private char[] warranty;
If your application and database are entirely nationalized, you may instead want to enable nationalized
character data as the default. You can do this via the hibernate.use_nationalized_character_data
setting or by calling MetadataBuilder#enableGlobalNationalizedCharacterDataSupport during
bootstrap.
The default UUID mapping is binary because it represents the more efficient storage option. However, many
applications prefer the readability of character storage. To switch the default mapping, simply call
MetadataBuilder.applyBasicType( UUIDCharType.INSTANCE, UUID.class.getName() ) .
Chosen as the default simply because it is generally more efficient from a storage perspective.
When using one of the PostgreSQL Dialects, this becomes the default UUID mapping.
Maps the UUID using PostgreSQL’s specific UUID data type. The PostgreSQL JDBC driver chooses to map its UUID type to the
OTHER code. Note that this can cause difficulty as the driver chooses to map many different data types to OTHER .
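The storage difference behind the binary default is easy to see with plain java.util.UUID: the binary form needs 16 bytes, while the canonical character form needs 36 characters (the helper below is illustrative, not a Hibernate class):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class UuidStorage {

    // Binary form: the UUID's two 64-bit halves, 16 bytes in total.
    static byte[] toBytes(UUID uuid) {
        return ByteBuffer.allocate(16)
                .putLong(uuid.getMostSignificantBits())
                .putLong(uuid.getLeastSignificantBits())
                .array();
    }

    public static void main(String[] args) {
        UUID uuid = UUID.randomUUID();
        System.out.println("binary: " + toBytes(uuid).length + " bytes");        // binary: 16 bytes
        System.out.println("character: " + uuid.toString().length() + " chars"); // character: 36 chars
    }
}
```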
DATE
Represents a calendar date by storing years, months and days. The JDBC equivalent is java.sql.Date
TIME
Represents the time of a day and it stores hours, minutes and seconds. The JDBC equivalent is java.sql.Time
TIMESTAMP
It stores both a DATE and a TIME plus nanoseconds. The JDBC equivalent is java.sql.Timestamp
To avoid dependencies on the java.sql package, it’s common to use the java.util or java.time
Date/Time classes instead.
While the java.sql classes define a direct association to the SQL Date/Time data types, the java.util
properties need to explicitly mark the SQL type correlation with the @Temporal annotation. This way, a java.util.Date or a
java.util.Calendar can be mapped to either an SQL DATE , TIME , or TIMESTAMP type.
JAVA
@Entity(name = "DateEvent")
public static class DateEvent {
@Id
@GeneratedValue
private Long id;
@Column(name = "`timestamp`")
@Temporal(TemporalType .DATE)
private Date timestamp;
JAVA
DateEvent dateEvent = new DateEvent ( new Date() );
entityManager.persist( dateEvent );
SQL
INSERT INTO DateEvent ( timestamp, id )
VALUES ( '2015-12-29', 1 )
Only the year, month, and day fields were saved into the database.
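A rough java.time analogy for what TemporalType.DATE does (a sketch, not code Hibernate actually executes): the time-of-day fields are simply dropped.

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

public class DateTruncation {
    public static void main(String[] args) {
        LocalDateTime dateTime = LocalDateTime.of(2015, 12, 29, 16, 51, 58);
        // Keeping only the date part mirrors what an SQL DATE column stores.
        LocalDate dateOnly = dateTime.toLocalDate();
        System.out.println(dateOnly); // prints "2015-12-29"
    }
}
```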
JAVA
@Column(name = "`timestamp`")
@Temporal(TemporalType .TIME)
private Date timestamp;
Hibernate will issue an INSERT statement containing the hour, minutes and seconds.
SQL
INSERT INTO DateEvent ( timestamp, id )
VALUES ( '16:51:58', 1 )
JAVA
@Column(name = "`timestamp`")
@Temporal(TemporalType .TIMESTAMP)
private Date timestamp;
Hibernate will include the DATE , the TIME , and the nanoseconds in the INSERT statement:
SQL
INSERT INTO DateEvent ( timestamp, id )
VALUES ( '2015-12-29 16:54:04.544', 1 )
Just like the java.util.Date , the java.util.Calendar requires the @Temporal annotation in order to
know which JDBC data type to choose: DATE, TIME or TIMESTAMP. While the java.util.Date marks a
point in time, the java.util.Calendar takes into consideration the default Time Zone.
The mapping between the standard SQL Date/Time types and the supported Java 8 Date/Time class types looks as follows:
DATE
java.time.LocalDate
TIME
java.time.LocalTime , java.time.OffsetTime
TIMESTAMP
java.time.Instant , java.time.LocalDateTime , java.time.OffsetDateTime and java.time.ZonedDateTime
Because the mapping between the Java 8 Date/Time classes and the SQL types is implicit, there is no
need to specify the @Temporal annotation. Setting it on the java.time classes throws an
exception.
When the time zone is not specified, the JDBC driver is going to use the underlying JVM default time zone, which might not be
suitable if the application is used from all across the globe. For this reason, it is very common to use a single reference time zone
(e.g. UTC) whenever saving/loading data from the database.
One alternative would be to configure all JVMs to use the reference time zone:
Declaratively
SHELL
java -Duser.timezone=UTC ...
Programmatically
JAVA
TimeZone .setDefault( TimeZone .getTimeZone( "UTC" ) );
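To see why a single reference time zone matters, note that the very same instant renders as different local date/times depending on the zone applied (plain java.time sketch):

```java
import java.time.Instant;
import java.time.ZoneId;

public class ReferenceTimeZone {
    public static void main(String[] args) {
        Instant instant = Instant.parse("2015-12-29T16:54:04Z");
        // One instant, two different local renderings:
        System.out.println(instant.atZone(ZoneId.of("UTC")).toLocalDateTime());              // 2015-12-29T16:54:04
        System.out.println(instant.atZone(ZoneId.of("America/New_York")).toLocalDateTime()); // 2015-12-29T11:54:04
    }
}
```

Storing and loading against a fixed zone such as UTC avoids this ambiguity.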
Another alternative is to set the hibernate.jdbc.time_zone Hibernate configuration property. With this property in place,
Hibernate is going to call the PreparedStatement.setTimestamp(int
parameterIndex, java.sql.Timestamp x, Calendar cal)
(https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html#setTimestamp-int-java.sql.Timestamp-java.util.Calendar-) or
PreparedStatement.setTime(int parameterIndex, java.sql.Time x, Calendar cal)
(https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html#setTime-int-java.sql.Time-java.util.Calendar-), where the
java.util.Calendar references the time zone provided via the hibernate.jdbc.time_zone property.
With a custom AttributeConverter , the application developer can map a given JDBC type to an entity basic type.
In the following example, the java.time.Period is going to be mapped to a VARCHAR database column.
JAVA
@Converter
public class PeriodStringConverter
implements AttributeConverter <Period , String > {
@Override
public String convertToDatabaseColumn(Period attribute) {
return attribute.toString();
}
@Override
public Period convertToEntityAttribute(String dbData) {
return Period .parse( dbData );
}
}
To make use of this custom converter, the @Convert annotation must decorate the entity attribute.
JAVA
@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
When persisting such an entity, Hibernate will do the type conversion based on the AttributeConverter logic:
SQL
INSERT INTO Event ( span, id )
VALUES ( 'P1Y2M3D', 1 )
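The converter's round trip can be checked with plain java.time: Period.toString() produces exactly the ISO-8601 form stored above, and Period.parse() restores it.

```java
import java.time.Period;

public class PeriodRoundTrip {
    public static void main(String[] args) {
        Period span = Period.of(1, 2, 3);
        // convertToDatabaseColumn side: Period -> ISO-8601 String.
        String dbData = span.toString();
        System.out.println(dbData); // prints "P1Y2M3D"
        // convertToEntityAttribute side: String -> Period.
        System.out.println(Period.parse(dbData).equals(span)); // prints "true"
    }
}
```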
If the Java type is not known to Hibernate, you will encounter the following message:
“ HHH000481: Encountered Java type for which we could not locate a JavaTypeDescriptor and which does
not appear to implement equals and/or hashCode. This can lead to significant performance problems
when performing equality/dirty checking involving this Java type. Consider registering a custom
JavaTypeDescriptor or at least implementing equals/hashCode. ”
Whether a Java type is "known" means it has an entry in the JavaTypeDescriptorRegistry . While by default Hibernate loads
many JDK types into the JavaTypeDescriptorRegistry , an application can also expand the JavaTypeDescriptorRegistry by
adding new JavaTypeDescriptor entries.
This way, Hibernate will also know how to handle a specific Java Object type at the JDBC level.
Immutable types
If the entity attribute is a String , a primitive wrapper (e.g. Integer , Long ), an Enum type, or any other immutable Object
type, then you can only change the entity attribute value by reassigning it to a new value.
Considering we have the same Period entity attribute as illustrated in the JPA 2.1 AttributeConverters section:
JAVA
@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
The only way to change the span attribute is to reassign it to a different value:
JAVA
Event event = entityManager.createQuery( "from Event", Event .class ).getSingleResult();
event .setSpan(Period
.ofYears( 3 )
.plusMonths( 2 )
.plusDays( 1 )
);
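The need for reassignment follows from java.time.Period being immutable: its arithmetic methods return new instances instead of modifying the receiver.

```java
import java.time.Period;

public class PeriodImmutability {
    public static void main(String[] args) {
        Period span = Period.ofYears(1);
        // plusMonths does not mutate span; it returns a fresh Period.
        Period longer = span.plusMonths(2);
        System.out.println(span);   // prints "P1Y"
        System.out.println(longer); // prints "P1Y2M"
    }
}
```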
Mutable types
On the other hand, consider the following example, where the Money type is mutable.
JAVA
@Entity(name = "Account")
public static class Account {
@Id
private Long id;
@Override
public Long convertToDatabaseColumn(Money attribute) {
return attribute == null ? null : attribute.getCents();
}
@Override
public Money convertToEntityAttribute(Long dbData) {
return dbData == null ? null : new Money ( dbData );
}
}
A mutable Object allows you to modify its internal structure, and Hibernate's dirty checking mechanism is going to propagate the
change to the database:
JAVA
Account account = entityManager.find( Account .class , 1L );
account.getBalance().setCents( 150 * 100L );
entityManager.persist( account );
Although the AttributeConverter types can be mutable so that dirty checking, deep copying and
second-level caching work properly, treating these as immutable (when they really are) is more
efficient.
For this reason, prefer immutable types over mutable ones whenever possible.
Once the reserved keywords are escaped, Hibernate will use the correct quotation style for the SQL Dialect . This is usually
double quotes, but SQL Server uses brackets and MySQL uses backticks.
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Long id;
@Column(name = "`name`")
private String name;
@Column(name = "`number`")
private String number;
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Long id;
@Column(name = "\"name\"")
private String name;
@Column(name = "\"number\"")
private String number;
Because name and number are reserved words, the Product entity mapping quotes these column names (using backticks in the
Hibernate-native style shown first, or double quotes in the JPA style shown second).
When saving the following Product entity, Hibernate generates the following SQL insert statement:
JAVA
Product product = new Product ();
product.setId( 1L );
product.setName( "Mobile phone" );
product.setNumber( "123-456-7890" );
entityManager.persist( product );
SQL
INSERT INTO Product ("name", "number", id)
VALUES ('Mobile phone', '123-456-7890', 1)
Global quoting
Hibernate can also quote all identifiers (e.g. table, columns) using the following configuration property:
XML
<property
name="hibernate.globally_quoted_identifiers"
value="true"
/>
JAVA
@Entity(name = "Product")
public static class Product {
@Id
private Long id;
When persisting a Product entity, Hibernate is going to quote all identifiers as in the following example:
SQL
INSERT INTO "Product" ("name", "number", "id")
VALUES ('Mobile phone', '123-456-7890', 1)
As you can see, both the table name and all the columns have been quoted.
For more about quoting-related configuration properties, check out the Mapping configurations section as well.
Properties marked as generated must additionally be non-insertable and non-updateable. Only @Version and @Basic types can
be marked as generated.
INSERT
the given property value is generated on insert but is not regenerated on subsequent updates. Properties like
creationTimestamp fall into this category.
ALWAYS
the given property value is generated both on insert and on update.
@Generated annotation
The @Generated (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Generated.html) annotation is used so that
Hibernate can fetch the currently annotated property after the entity has been persisted or updated. For this reason, the
@Generated annotation accepts a GenerationTime
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/GenerationTime.html) enum value.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
When the Person entity is persisted, Hibernate is going to fetch the calculated fullName column from the database, which
concatenates the first, middle, and last name.
JAVA
Person person = new Person ();
person.setId( 1L );
person.setFirstName( "John" );
person.setMiddleName1( "Flávio" );
person.setMiddleName2( "André" );
person.setMiddleName3( "Frederico" );
person.setMiddleName4( "Rúben" );
person.setMiddleName5( "Artur" );
person.setLastName( "Doe" );
entityManager.persist( person );
entityManager.flush();
SQL
INSERT INTO Person
(
firstName,
lastName,
middleName1,
middleName2,
middleName3,
middleName4,
middleName5,
id
)
values
(?, ?, ?, ?, ?, ?, ?, ?)
SELECT
p.fullName as fullName3_0_
FROM
Person p
WHERE
p.id=?
The same goes when the Person entity is updated. Hibernate is going to fetch the calculated fullName column from the
database after the entity is modified.
JAVA
Person person = entityManager.find( Person .class , 1L );
person.setLastName( "Doe Jr" );
entityManager.flush();
assertEquals("John Flávio André Frederico Rúben Artur Doe Jr", person.getFullName());
UPDATE
Person
SET
firstName=?,
lastName=?,
middleName1=?,
middleName2=?,
middleName3=?,
middleName4=?,
middleName5=?
WHERE
id=?
SELECT
p.fullName as fullName3_0_
FROM
Person p
WHERE
p.id=?
@GeneratorType annotation
The @GeneratorType (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/GeneratorType.html) annotation is used so
that you can provide a custom generator to set the value of the currently annotated property.
JAVA
private static final ThreadLocal <String > storage = new ThreadLocal <>();
@Override
public String generateValue(
Session session, Object owner) {
return CurrentUser .INSTANCE.get();
}
}
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
When the Person entity is persisted, Hibernate is going to populate the createdBy column with the currently logged user.
JAVA
entityManager.persist( person );
} );
CurrentUser .INSTANCE.logOut();
SQL
INSERT INTO Person
(
createdBy,
firstName,
lastName,
updatedBy,
id
)
VALUES
(?, ?, ?, ?, ?)
The same goes when the Person entity is updated. Hibernate is going to populate the updatedBy column with the currently
logged user.
JAVA
CurrentUser .INSTANCE.logIn( "Bob" );
CurrentUser .INSTANCE.logOut();
SQL
UPDATE Person
SET
createdBy = ?,
firstName = ?,
lastName = ?,
updatedBy = ?
WHERE
id = ?
@CreationTimestamp annotation
The @CreationTimestamp annotation instructs Hibernate to set the annotated entity attribute with the current timestamp value
of the JVM when the entity is being persisted.
java.util.Date
java.util.Calendar
java.sql.Date
java.sql.Time
java.sql.Timestamp
JAVA
@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
@Column(name = "`timestamp`")
@CreationTimestamp
private Date timestamp;
When the Event entity is persisted, Hibernate is going to populate the underlying timestamp column with the current JVM
timestamp value:
JAVA
Event dateEvent = new Event ( );
entityManager.persist( dateEvent );
SQL
INSERT INTO Event ("timestamp", id)
VALUES (?, ?)
@UpdateTimestamp annotation
The @UpdateTimestamp annotation instructs Hibernate to set the annotated entity attribute with the current timestamp value of
the JVM when the entity is being persisted or updated.
java.util.Date
java.util.Calendar
java.sql.Date
java.sql.Time
java.sql.Timestamp
JAVA
@Entity(name = "Bid")
public static class Bid {
@Id
@GeneratedValue
private Long id;
@Column(name = "updated_on")
@UpdateTimestamp
private Date updatedOn;
@Column(name = "updated_by")
private String updatedBy;
When the Bid entity is persisted, Hibernate is going to populate the underlying updated_on column with the current JVM
timestamp value:
JAVA
Bid bid = new Bid();
bid.setUpdatedBy( "John Doe" );
bid.setCents( 150 * 100L );
entityManager.persist( bid );
SQL
When updating the Bid entity, Hibernate is going to modify the updated_on column with the current JVM timestamp value:
JAVA
Bid bid = entityManager.find( Bid.class , 1L );
SQL
UPDATE Bid SET
cents = ?,
updated_by = ?,
updated_on = ?
where
id = ?
@ValueGenerationType meta-annotation
Hibernate 4.3 introduced the @ValueGenerationType meta-annotation, which is a new approach to declaring generated
attributes or customizing generators.
@Generated has been retrofitted to use the @ValueGenerationType meta-annotation. But @ValueGenerationType exposes
more features than what @Generated currently supports, and, to leverage some of those features, you’d simply wire up a new
generator annotation.
As you’ll see in the following examples, the @ValueGenerationType meta-annotation is used when declaring the custom
annotation used to mark the entity properties that need a specific generation strategy. The actual generation logic must be added
to the class that implements the AnnotationValueGeneration interface.
Database-generated values
For example, let’s say we want the timestamps to be generated by calls to the standard ANSI SQL function current_timestamp
(rather than triggers or DEFAULT values):
JAVA
@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
@Column(name = "`timestamp`")
@FunctionCreationTimestamp
private Date timestamp;
@Override
public void initialize(FunctionCreationTimestamp annotation, Class <?> propertyType) {
}
/**
* Generate value on INSERT
* @return when to generate the value
*/
public GenerationTiming getGenerationTiming() {
return GenerationTiming .INSERT;
}
/**
* Returns null because the value is generated by the database.
* @return null
*/
public ValueGenerator <?> getValueGenerator() {
return null;
}
/**
* Returns true because the value is generated by the database.
* @return true
*/
public boolean referenceColumnInSql() {
return true;
}
/**
* Returns the database-generated value
* @return database-generated value
*/
public String getDatabaseGeneratedReferencedColumnValue() {
return "current_timestamp";
}
}
When persisting an Event entity, Hibernate generates the following SQL statement:
SQL
INSERT INTO Event ("timestamp", id)
VALUES (current_timestamp, 1)
As you can see, the current_timestamp value was used for assigning the timestamp column value.
In-memory-generated values
If the timestamp value needs to be generated in-memory, the following mapping must be used instead:
JAVA
@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
@Column(name = "`timestamp`")
@FunctionCreationTimestamp
private Date timestamp;
@Override
public void initialize(FunctionCreationTimestamp annotation, Class <?> propertyType) {
}
/**
* Generate value on INSERT
* @return when to generate the value
*/
public GenerationTiming getGenerationTiming() {
return GenerationTiming .INSERT;
}
/**
* Returns the in-memory generated value
* @return the in-memory ValueGenerator
*/
public ValueGenerator <?> getValueGenerator() {
return (session, owner) -> new Date( );
}
/**
* Returns false because the value is generated in-memory, not by the database.
* @return false
*/
public boolean referenceColumnInSql() {
return false ;
}
/**
* Returns null because the value is generated in-memory.
* @return null
*/
public String getDatabaseGeneratedReferencedColumnValue() {
return null;
}
}
When persisting an Event entity, Hibernate generates the following SQL statement:
SQL
INSERT INTO Event ("timestamp", id)
VALUES ('Tue Mar 01 10:58:18 EET 2016', 1)
As you can see, the new Date() object value was used for assigning the timestamp column value.
JAVA
@Entity(name = "Employee")
public static class Employee {
@Id
private Long id;
@NaturalId
private String username;
@Column(name = "pswd")
@ColumnTransformer(
read = "decrypt( 'AES', '00', pswd )",
write = "encrypt('AES', '00', ?)"
)
private String password;
@ManyToMany(mappedBy = "employees")
private List<Project > projects = new ArrayList <>();
You can use the plural form @ColumnTransformers if more than one column needs to define either of
these rules.
If a property uses more than one column, you must use the forColumn attribute to specify which column the expressions are
targeting.
JAVA
@Entity(name = "Savings")
public static class Savings {
@Id
private Long id;
@Type(type = "org.hibernate.userguide.mapping.basic.MonetaryAmountUserType")
@Columns(columns = {
@Column(name = "money"),
@Column(name = "currency")
})
@ColumnTransformer(
forColumn = "money",
read = "money / 100",
write = "? * 100"
)
private MonetaryAmount wallet;
Hibernate applies the custom expressions automatically whenever the property is referenced in a query. This functionality is
similar to a derived-property @Formula with two differences:
The property is backed by one or more columns that are exported as part of automatic schema generation.
The write expression, if specified, must contain exactly one '?' placeholder for the value.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Savings savings = new Savings ( );
savings.setId( 1L );
savings.setWallet( new MonetaryAmount ( BigDecimal .TEN, Currency .getInstance( Locale .US ) ) );
entityManager.persist( savings );
} );
SQL
INSERT INTO Savings (money, currency, id)
VALUES (10 * 100, 'USD', 1)
SELECT
s.id as id1_0_0_,
s.money / 100 as money2_0_0_,
s.currency as currency3_0_0_
FROM
Savings s
WHERE
s.id = 1
2.3.20. @Formula
Sometimes, you want the database to do some computation for you rather than doing it in the JVM, and you might also want to create some kind of
virtual column. Rather than mapping a property to a column, you can use a SQL fragment (aka formula). This kind of property is
read-only (its value is calculated by your formula fragment).
You should be aware that the @Formula annotation takes a native SQL clause which can affect
database portability.
JAVA
@Entity(name = "Account")
public static class Account {
@Id
private Long id;
When loading the Account entity, Hibernate is going to calculate the interest property using the configured @Formula :
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Account account = new Account ( );
account.setId( 1L );
account.setCredit( 5000d );
account.setRate( 1.25 / 100 );
entityManager.persist( account );
} );
SQL
SELECT
a.id as id1_0_0_,
a.credit as credit2_0_0_,
a.rate as rate3_0_0_,
a.credit * a.rate as formula0_0_
FROM
Account a
WHERE
a.id = 1
The SQL fragment can be as complex as you want and even include subselects.
2.3.21. @Where
Sometimes, you want to filter out entities or collections using custom SQL criteria. This can be achieved using the @Where
annotation, which can be applied to entities and collections.
JAVA
@Entity(name = "Client")
public static class Client {
@Id
private Long id;
@Entity(name = "Account")
@Where( clause = "active = true" )
public static class Account {
@Id
private Long id;
@ManyToOne
private Client client;
@Column(name = "account_type")
@Enumerated(EnumType .STRING)
private AccountType type;
JAVA
SQL
INSERT INTO Client (name, id)
VALUES ('John Doe', 1)
When executing an Account entity query, Hibernate is going to filter out all records that are not active.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
List<Account > accounts = entityManager.createQuery(
"select a from Account a", Account .class )
.getResultList();
assertEquals( 2, accounts.size());
} );
SQL
SELECT
a.id as id1_0_,
a.active as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE ( a.active = true )
When fetching the debitAccounts or the creditAccounts collections, Hibernate is going to apply the @Where clause filtering
criteria to the associated child entities.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Client client = entityManager.find( Client .class , 1L );
assertEquals( 1, client.getCreditAccounts().size() );
assertEquals( 1, client.getDebitAccounts().size() );
} );
SQL
SELECT
c.client_id as client_i6_0_0_,
c.id as id1_0_0_,
c.id as id1_0_1_,
c.active as active2_0_1_,
c.amount as amount3_0_1_,
c.client_id as client_i6_0_1_,
c.rate as rate4_0_1_,
c.account_type as account_5_0_1_
FROM
Account c
WHERE ( c.active = true and c.account_type = 'CREDIT' ) AND c.client_id = 1
SELECT
d.client_id as client_i6_0_0_,
d.id as id1_0_0_,
d.id as id1_0_1_,
d.active as active2_0_1_,
d.amount as amount3_0_1_,
d.client_id as client_i6_0_1_,
d.rate as rate4_0_1_,
d.account_type as account_5_0_1_
FROM
Account d
WHERE ( d.active = true and d.account_type = 'DEBIT' ) AND d.client_id = 1
2.3.22. @WhereJoinTable
Just like the @Where annotation, @WhereJoinTable is used to filter out collections, but it applies to the join table columns (e.g. of a @ManyToMany
association).
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@ManyToMany
@JoinTable(
name = "Book_Reader",
joinColumns = @JoinColumn(name = "book_id"),
inverseJoinColumns = @JoinColumn(name = "reader_id")
)
@WhereJoinTable( clause = "created_on > DATEADD( 'DAY', -7, CURRENT_TIMESTAMP() )")
private List<Reader > currentWeekReaders = new ArrayList <>( );
@Entity(name = "Reader")
public static class Reader {
@Id
private Long id;
SQL
In the example above, the current week Reader entities are included in the currentWeekReaders collection which uses the
@WhereJoinTable annotation to filter the joined table rows according to the provided SQL clause.
Considering that the following two Book_Reader entries are added into our system:
JAVA
statement.executeUpdate(
"INSERT INTO Book_Reader " +
" (book_id, reader_id) " +
"VALUES " +
" (1, 1) "
);
statement.executeUpdate(
"INSERT INTO Book_Reader " +
" (book_id, reader_id, created_on) " +
"VALUES " +
" (1, 2, DATEADD( 'DAY', -10, CURRENT_TIMESTAMP() )) "
);
When fetching the currentWeekReaders collection, Hibernate is going to find only one entry:
JAVA
Book book = entityManager.find( Book.class , 1L );
assertEquals( 1, book.getCurrentWeekReaders().size() );
2.3.23. @Filter
The @Filter annotation is another way to filter out entities or collections using custom SQL criteria. Unlike the @Where
annotation, @Filter allows you to parameterize the filter clause at runtime.
JAVA
@Entity(name = "Account")
@FilterDef(
name="activeAccount",
parameters = @ParamDef(
name="active",
type="boolean"
)
)
@Filter(
name="activeAccount",
condition="active_status = :active"
)
public static class Account {
@Id
private Long id;
@Column(name = "account_type")
@Enumerated(EnumType .STRING)
private AccountType type;
@Column(name = "active_status")
private boolean active;
This mapping was done to show you that the @Filter condition uses a SQL condition and not a JPQL
filtering predicate.
As already explained, we can also apply the @Filter annotation for collections as illustrated by the Client entity:
JAVA
@Entity(name = "Client")
public static class Client {
@Id
private Long id;
@OneToMany(
mappedBy = "client",
cascade = CascadeType .ALL
)
@Filter(
name="activeAccount",
condition="active_status = :active"
)
private List<Account > accounts = new ArrayList <>( );
If we persist a Client with three associated Account entities, Hibernate will execute the following SQL statements:
JAVA
Client client = new Client ()
.setId( 1L )
.setName( "John Doe" );
client.addAccount(
new Account ()
.setId( 1L )
.setType( AccountType .CREDIT )
.setAmount( 5000d )
.setRate( 1.25 / 100 )
.setActive( true )
);
client.addAccount(
new Account ()
.setId( 2L )
.setType( AccountType .DEBIT )
.setAmount( 0d )
.setRate( 1.05 / 100 )
.setActive( false )
);
client.addAccount(
new Account ()
.setType( AccountType .DEBIT )
.setId( 3L )
.setAmount( 250d )
.setRate( 1.05 / 100 )
.setActive( true )
);
entityManager.persist( client );
SQL
INSERT INTO Client (name, id)
VALUES ('John Doe', 1)
By default, without explicitly enabling the filter, Hibernate is going to fetch all Account entities.
JAVA
List<Account > accounts = entityManager.createQuery(
"select a from Account a", Account .class )
.getResultList();
assertEquals( 3, accounts.size());
SQL
SELECT
a.id as id1_0_,
a.active_status as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
If the filter is enabled and the filter parameter value is provided, then Hibernate is going to apply the filtering criteria to the
associated Account entities.
JAVA
entityManager
.unwrap( Session.class )
.enableFilter( "activeAccount" )
.setParameter( "active", true );
List<Account> accounts = entityManager.createQuery(
"select a from Account a", Account.class )
.getResultList();
assertEquals( 2, accounts.size());
SQL
SELECT
a.id as id1_0_,
a.active_status as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE
a.active_status = true
Therefore, in the following example, the filter is not taken into consideration when fetching an entity
directly from the Persistence Context.
JAVA
Account account = entityManager.find( Account.class, 2L );
assertFalse( account.isActive() );
SQL
SELECT
a.id as id1_0_0_,
a.active_status as active2_0_0_,
a.amount as amount3_0_0_,
a.client_id as client_i6_0_0_,
a.rate as rate4_0_0_,
a.account_type as account_5_0_0_,
c.id as id1_1_1_,
c.name as name2_1_1_
FROM
Account a
WHERE
a.id = 2
As you can see from the example above, contrary to an entity query, the filter does not prevent the entity from
being loaded.
Just like with entity queries, collections can be filtered as well, but only if the filter is explicitly enabled on the currently running
Hibernate Session .
JAVA
assertEquals( 3, client.getAccounts().size() );
SQL
SELECT
c.id as id1_1_0_,
c.name as name2_1_0_
FROM
Client c
WHERE
c.id = 1
SELECT
a.id as id1_0_,
a.active_status as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE
a.client_id = 1
When activating the @Filter and fetching the accounts collection, Hibernate is going to apply the filter condition to the
associated collection entries.
JAVA
entityManager
.unwrap( Session.class )
.enableFilter( "activeAccount" )
.setParameter( "active", true );
assertEquals( 2, client.getAccounts().size() );
SQL
SELECT
c.id as id1_1_0_,
c.name as name2_1_0_
FROM
Client c
WHERE
c.id = 1
SELECT
a.id as id1_0_,
a.active_status as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE
a.active_status = true
and a.client_id = 1
The main advantage of @Filter over the @Where clause is that the filtering criteria can be customized
at runtime.
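Conceptually, a filter behaves like a predicate whose parameter values are supplied per Session at runtime. The idea can be sketched in plain Java with no Hibernate API involved (the Account record and the activeAccount factory below are illustrative stand-ins, not part of the mapping):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class RuntimeFilterSketch {

    record Account(long id, boolean active) { }

    // The "filter definition": a predicate parameterized at runtime,
    // playing the role of @Filter's :active parameter.
    static Predicate<Account> activeAccount(boolean active) {
        return account -> account.active() == active;
    }

    public static void main(String[] args) {
        List<Account> accounts = List.of(
            new Account(1, true), new Account(2, false), new Account(3, true));

        List<Account> filtered = accounts.stream()
            .filter(activeAccount(true))   // parameter value supplied at runtime
            .collect(Collectors.toList());

        System.out.println(filtered.size()); // prints 2
    }
}
```

Unlike @Where, whose condition is fixed at mapping time, the parameter value here can differ from one invocation (one Session) to the next.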
It’s not possible to combine the @Filter and @Cache collection annotations. This limitation exists to
preserve consistency, because the filtering information is not stored in the second-level cache.
If caching were allowed for a currently filtered collection, the second-level cache would store only
a subset of the whole collection. Every other Session would then get the filtered collection from the cache,
even if that Session had not explicitly activated the filter.
For this reason, the second-level collection cache stores only whole collections, never subsets.
2.3.24. @FilterJoinTable
When using the @Filter annotation with collections, the filtering is done against the child entries (entities or embeddables).
However, if you have a link table between the parent entity and the child table, then you need to use the @FilterJoinTable to
filter child entries according to some column contained in the join table.
The @FilterJoinTable annotation can be, therefore, applied to a unidirectional @OneToMany collection as illustrated in the
following mapping:
JAVA
@Entity(name = "Client")
@FilterDef(
name="firstAccounts",
parameters=@ParamDef(
name="maxOrderId",
type="int"
)
)
@Filter(
name="firstAccounts",
condition="order_id <= :maxOrderId"
)
public static class Client {
@Id
private Long id;
@Entity(name = "Account")
public static class Account {
@Id
private Long id;
@Column(name = "account_type")
@Enumerated(EnumType.STRING)
private AccountType type;
The firstAccounts filter will allow us to get only the Account entities whose order_id (which gives the position of
each entry inside the accounts collection) is less than or equal to a given number (e.g. maxOrderId ).
JAVA
Client client = new Client()
.setId( 1L )
.setName( "John Doe" );
client.addAccount(
new Account()
.setId( 1L )
.setType( AccountType.CREDIT )
.setAmount( 5000d )
.setRate( 1.25 / 100 )
);
client.addAccount(
new Account()
.setId( 2L )
.setType( AccountType.DEBIT )
.setAmount( 0d )
.setRate( 1.05 / 100 )
);
client.addAccount(
new Account()
.setType( AccountType.DEBIT )
.setId( 3L )
.setAmount( 250d )
.setRate( 1.05 / 100 )
);
entityManager.persist( client );
SQL
INSERT INTO Client (name, id)
VALUES ('John Doe', 1)
The collections can be filtered only if the associated filter is enabled on the currently running Hibernate Session .
Example 99. Traversing collections mapped with @FilterJoinTable without enabling the filter
JAVA
Client client = entityManager.find( Client.class, 1L );
assertEquals( 3, client.getAccounts().size());
SQL
SELECT
ca.Client_id as Client_i1_2_0_ ,
ca.accounts_id as accounts2_2_0_,
ca.order_id as order_id3_0_,
a.id as id1_0_1_,
a.amount as amount3_0_1_,
a.rate as rate4_0_1_,
a.account_type as account_5_0_1_
FROM
Client_Account ca
INNER JOIN
Account a
ON ca.accounts_id=a.id
WHERE
ca.Client_id = ?
If we enable the filter and set maxOrderId to 1 when fetching the accounts collection, Hibernate is going to apply the
@FilterJoinTable filtering criteria, and we will get just 2 Account entities, with order_id values of 0 and 1 .
JAVA
Client client = entityManager.find( Client.class, 1L );
entityManager
.unwrap( Session.class )
.enableFilter( "firstAccounts" )
.setParameter( "maxOrderId", 1 );
assertEquals( 2, client.getAccounts().size());
SQL
SELECT
ca.Client_id as Client_i1_2_0_ ,
ca.accounts_id as accounts2_2_0_,
ca.order_id as order_id3_0_,
a.id as id1_0_1_,
a.amount as amount3_0_1_,
a.rate as rate4_0_1_,
a.account_type as account_5_0_1_
FROM
Client_Account ca
INNER JOIN
Account a
ON ca.accounts_id=a.id
WHERE
ca.order_id <= ?
AND ca.Client_id = ?
When the @Filter condition references columns from multiple tables (e.g. when the entity uses a @SecondaryTable), the @SqlFragmentAlias annotation is needed to tell Hibernate which table each alias placeholder in the condition refers to, as illustrated by the following mapping:
JAVA
@Entity(name = "Account")
@Table(name = "account")
@SecondaryTable(
name = "account_details"
)
@SQLDelete(
sql = "UPDATE account_details SET deleted = true WHERE id = ? "
)
@FilterDef(
name="activeAccount",
parameters = @ParamDef(
name="active",
type="boolean"
)
)
@Filter(
name="activeAccount",
condition="{a}.active = :active and {ad}.deleted = false",
aliases = {
@SqlFragmentAlias( alias = "a", table = "account" ),
@SqlFragmentAlias( alias = "ad", table = "account_details" )
}
)
public static class Account {
@Id
private Long id;
@Column(table = "account_details")
private boolean deleted;
Now, when fetching the Account entities and activating the filter, Hibernate is going to apply the right table aliases to the filter
predicates:
JAVA
entityManager
.unwrap( Session.class )
.enableFilter( "activeAccount" )
.setParameter( "active", true );
SQL
select
filtersqlf0_.id as id1_0_,
filtersqlf0_.active as active2_0_,
filtersqlf0_.amount as amount3_0_,
filtersqlf0_.rate as rate4_0_,
filtersqlf0_1_.deleted as deleted1_1_
from
account filtersqlf0_
left outer join
account_details filtersqlf0_1_
on filtersqlf0_.id=filtersqlf0_1_.id
where
filtersqlf0_.active = ?
and filtersqlf0_1_.deleted = false
It is impossible to specify a foreign key constraint for this kind of association. This is not the usual way
of mapping polymorphic associations and you should use this only in special cases (e.g. audit logs,
user session data, etc).
The @Any annotation describes the column holding the metadata information. To link the value of the metadata information to
an actual entity type, the @AnyMetaDef and @AnyMetaDefs annotations are used. The metaType attribute allows the application to
specify a custom type that maps database column values to persistent classes that have identifier properties of the type specified
by idType . You must specify the mapping from values of the metaType to class names.
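Conceptually, the metaType/metaValues mapping boils down to a dictionary from discriminator values to entity classes. A plain-Java sketch of that resolution step (the class names here are illustrative stand-ins, not the mapped entities):

```java
import java.util.Map;

public class MetaTypeSketch {

    static class StringProperty { }
    static class IntegerProperty { }

    // What @AnyMetaDef's metaValues boil down to: discriminator value -> class.
    static final Map<String, Class<?>> META_VALUES = Map.of(
        "S", StringProperty.class,
        "I", IntegerProperty.class
    );

    // The lookup Hibernate performs when it reads the discriminator column.
    static Class<?> resolve(String discriminator) {
        return META_VALUES.get(discriminator);
    }

    public static void main(String[] args) {
        System.out.println(resolve("S").getSimpleName()); // prints StringProperty
    }
}
```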
For the next examples, consider the following Property class hierarchy:
JAVA
public interface Property<T> {
String getName();
T getValue();
}
@Entity
@Table(name="integer_property")
public class IntegerProperty implements Property<Integer> {
@Id
private Long id;
@Column(name = "`name`")
private String name;
@Column(name = "`value`")
private Integer value;
@Override
public String getName() {
return name;
}
@Override
public Integer getValue() {
return value;
}
@Entity
@Table(name="string_property")
public class StringProperty implements Property<String> {
@Id
private Long id;
@Column(name = "`name`")
private String name;
@Column(name = "`value`")
private String value;
@Override
public String getName() {
return name;
}
@Override
public String getValue() {
return value;
}
A PropertyHolder can reference any such property and, because each Property belongs to a separate table, the @Any
annotation is required.
JAVA
@Entity
@Table( name = "property_holder" )
public class PropertyHolder {
@Id
private Long id;
@Any(
metaDef = "PropertyMetaDef",
metaColumn = @Column( name = "property_type" )
)
@JoinColumn( name = "property_id" )
private Property property;
SQL
CREATE TABLE property_holder (
id BIGINT NOT NULL,
property_type VARCHAR(255),
property_id BIGINT,
PRIMARY KEY ( id )
)
As you can see, there are two columns used to reference a Property instance: property_id and property_type . The
property_id column matches the id column of either the string_property or the integer_property table, while the
property_type column determines which of those two tables the association should join.
The mapping that resolves the target table is defined by the metaDef attribute, which references an @AnyMetaDef mapping. Although the
@AnyMetaDef mapping could be placed right next to the @Any annotation, it’s good practice to reuse it, so it makes sense to
configure it at the class or package level.
JAVA
@AnyMetaDef( name = "PropertyMetaDef", metaType = "string", idType = "long",
metaValues = {
@MetaValue(value = "S", targetEntity = StringProperty.class ),
@MetaValue(value = "I", targetEntity = IntegerProperty.class )
}
)
package org.hibernate.userguide.mapping.basic.any;
import org.hibernate.annotations.AnyMetaDef;
import org.hibernate.annotations.MetaValue;
If we persist an IntegerProperty as well as a StringProperty entity, and associate the StringProperty entity with a
PropertyHolder , Hibernate will generate the following SQL queries:
JAVA
IntegerProperty ageProperty = new IntegerProperty();
ageProperty.setId( 1L );
ageProperty.setName( "age" );
ageProperty.setValue( 23 );
session.persist( ageProperty );
session.persist( nameProperty );
session.persist( namePropertyHolder );
SQL
INSERT INTO integer_property
( "name", "value", id )
VALUES ( 'age', 23, 1 )
When fetching the PropertyHolder entity and navigating its property association, Hibernate will fetch the associated
StringProperty entity like this:
JAVA
PropertyHolder propertyHolder = session.get( PropertyHolder.class, 1L );
assertEquals("name", propertyHolder.getProperty().getName());
assertEquals("John Doe", propertyHolder.getProperty().getValue());
SQL
@ManyToAny mapping
The @Any mapping is useful to emulate a @ManyToOne association when there can be multiple target entities. To emulate a
@OneToMany association, the @ManyToAny annotation must be used.
In the following example, the PropertyRepository entity has a collection of Property entities.
The repository_properties link table holds the associations between PropertyRepository and Property entities.
JAVA
@Entity
@Table( name = "property_repository" )
public class PropertyRepository {
@Id
private Long id;
@ManyToAny(
metaDef = "PropertyMetaDef",
metaColumn = @Column( name = "property_type" )
)
@Cascade( { org.hibernate.annotations.CascadeType.ALL } )
@JoinTable(name = "repository_properties",
joinColumns = @JoinColumn(name = "repository_id"),
inverseJoinColumns = @JoinColumn(name = "property_id")
)
private List<Property<?>> properties = new ArrayList<>();
SQL
CREATE TABLE property_repository (
id BIGINT NOT NULL,
PRIMARY KEY ( id )
)
If we persist an IntegerProperty as well as a StringProperty entity, and associate both of them with a PropertyRepository
parent entity, Hibernate will generate the following SQL queries:
JAVA
IntegerProperty ageProperty = new IntegerProperty();
ageProperty.setId( 1L );
ageProperty.setName( "age" );
ageProperty.setValue( 23 );
session.persist( ageProperty );
session.persist( nameProperty );
propertyRepository.getProperties().add( ageProperty );
propertyRepository.getProperties().add( nameProperty );
session.persist( propertyRepository );
SQL
INSERT INTO integer_property
( "name", "value", id )
VALUES ( 'age', 23, 1 )
When fetching the PropertyRepository entity and navigating its properties association, Hibernate will fetch the associated
IntegerProperty and StringProperty entities like this:
JAVA
PropertyRepository propertyRepository = session.get( PropertyRepository.class, 1L );
assertEquals(2, propertyRepository.getProperties().size());
SQL
JAVA
@Entity(name = "User")
@Table(name = "users")
public static class User {
@Id
private Long id;
@ManyToOne
@JoinFormula( "REGEXP_REPLACE(phoneNumber, '\\+(\\d+)-.*', '\\1')::int" )
private Country country;
@Entity(name = "Country")
@Table(name = "countries")
public static class Country {
@Id
private Integer id;
//Getters and setters, equals and hashCode methods omitted for brevity
SQL
The country association in the User entity is mapped by the country identifier provided by the phoneNumber property.
JAVA
Country US = new Country();
US.setId( 1 );
US.setName( "United States" );
When fetching the User entities, the country property is mapped by the @JoinFormula expression:
JAVA
User user = entityManager.find( User.class, 1L );
SQL
-- Fetch User entities
SELECT
u.id as id1_1_0_,
u.firstName as firstNam2_1_0_,
u.lastName as lastName3_1_0_,
u.phoneNumber as phoneNum4_1_0_,
REGEXP_REPLACE(u.phoneNumber, '\+(\d+)-.*', '\1')::int as formula1_0_,
c.id as id1_0_1_,
c.name as name2_0_1_
FROM
users u
LEFT OUTER JOIN
countries c
ON REGEXP_REPLACE(u.phoneNumber, '\+(\d+)-.*', '\1')::int = c.id
WHERE
u.id=?
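The REGEXP_REPLACE expression used in the join condition extracts the numeric country prefix from the phone number. Its effect can be reproduced in plain Java (extractCountryCode is a hypothetical helper for illustration, not part of the Hibernate mapping):

```java
public class CountryCodeDemo {

    // Mirrors the @JoinFormula expression: keep only the digits between
    // the leading '+' and the first '-'.
    static int extractCountryCode(String phoneNumber) {
        return Integer.parseInt(phoneNumber.replaceAll("\\+(\\d+)-.*", "$1"));
    }

    public static void main(String[] args) {
        System.out.println(extractCountryCode("+1-234-567-8901")); // prints 1
        System.out.println(extractCountryCode("+40-213-456-789")); // prints 40
    }
}
```

The database evaluates the equivalent expression on every join, which is why such formulas should stay simple and index-friendly.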
Therefore, the @JoinFormula annotation is used to define a custom join condition between the parent and child entities.
JAVA
@Entity(name = "User")
@Table(name = "users")
public static class User {
@Id
private Long id;
@ManyToOne
@JoinColumnOrFormula( column =
@JoinColumn(
name = "language",
referencedColumnName = "primaryLanguage",
insertable = false ,
updatable = false
)
)
@JoinColumnOrFormula( formula =
@JoinFormula(
value = "true",
referencedColumnName = "is_default"
)
)
private Country country;
@Entity(name = "Country")
@Table(name = "countries")
public static class Country implements Serializable {
@Id
private Integer id;
@Column(name = "is_default")
private boolean _default;
//Getters and setters, equals and hashCode methods omitted for brevity
SQL
The country association in the User entity is mapped by the language property value and the associated Country
is_default column value.
JAVA
Country US = new Country();
US.setId( 1 );
US.setDefault( true );
US.setPrimaryLanguage( "English" );
US.setName( "United States" );
} );
When fetching the User entities, the country property is mapped by the @JoinColumnOrFormula expression:
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
log.info( "Fetch User entities" );
SQL
SELECT
u.id as id1_1_0_,
u.language as language3_1_0_,
u.firstName as firstNam2_1_0_,
u.lastName as lastName4_1_0_,
1 as formula1_0_,
c.id as id1_0_1_,
c.is_default as is_defau2_0_1_,
c.name as name3_0_1_,
c.primaryLanguage as primaryL4_0_1_
FROM
users u
LEFT OUTER JOIN
countries c
ON u.language = c.primaryLanguage
AND 1 = c.is_default
WHERE
u.id = ?
Therefore, the @JoinColumnOrFormula annotation is used to define a custom join condition, combining a regular column with a
formula, between the parent and child entities.
The JPA annotations @ManyToOne
(http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToOne.html), @OneToOne
(http://docs.oracle.com/javaee/7/api/javax/persistence/OneToOne.html), @OneToMany
(http://docs.oracle.com/javaee/7/api/javax/persistence/OneToMany.html), and @ManyToMany
(http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToMany.html) feature a targetEntity
(http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToOne.html#targetEntity--) attribute to specify the actual class of the entity
association when an interface is used for the mapping.
However, for simple embeddable types, there is no such construct and so you need to use the Hibernate-specific @Target
annotation instead.
JAVA
@Embeddable
public static class GPS implements Coordinates {
private double latitude;
private double longitude;
private GPS() {
}
@Override
public double x() {
return latitude;
}
@Override
public double y() {
return longitude;
}
}
@Entity(name = "City")
public static class City {
@Id
@GeneratedValue
private Long id;
@Embedded
@Target( GPS.class )
private Coordinates coordinates;
The coordinates embeddable type is mapped as the Coordinates interface. However, Hibernate needs to know the actual
implementation type, which is GPS in this case, hence the @Target annotation is used to provide this information.
JAVA
entityManager.persist( cluj );
} );
When fetching the City entity, the coordinates property is mapped by the @Target expression:
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
SQL
SELECT
c.id as id1_0_0_,
c.latitude as latitude2_0_0_,
c.longitude as longitud3_0_0_,
c.name as name4_0_0_
FROM
City c
WHERE
c.id = ?
Therefore, the @Target annotation is used to specify the concrete implementation class of an interface-typed embeddable attribute.
JAVA
@Embeddable
public static class GPS {
@Parent
private City city;
@Entity(name = "City")
public static class City {
@Id
@GeneratedValue
private Long id;
@Embedded
@Target( GPS.class )
private GPS coordinates;
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
entityManager.persist( cluj );
} );
When fetching the City entity, the city property of the embeddable type acts as a back reference to the owning parent entity:
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Therefore, the @Parent annotation is used to define the association between an embeddable type and the owning entity.
For example, we might have a Publisher class that is a composition of name and country , or a Location class that is a
composition of country and city .
Throughout this chapter and thereafter, for brevity's sake, embeddable types may also be referred to as
embeddables.
JAVA
@Embeddable
public static class Publisher {
private Publisher() {}
@Embeddable
public static class Location {
private Location() {}
An embeddable type is another form of a value type, and its lifecycle is bound to a parent entity type, therefore inheriting the
attribute access from its parent (for details on attribute access, see Access strategies).
Embeddable types can be made up of basic values as well as associations, with the caveat that, when used as collection elements,
they cannot define collections themselves.
JAVA
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue
private Long id;
@Embeddable
public static class Publisher {
@Column(name = "publisher_name")
private String name;
@Column(name = "publisher_country")
private String country;
//Getters and setters, equals and hashCode methods omitted for brevity
SQL
create table Book (
id bigint not null,
author varchar(255),
publisher_country varchar(255),
publisher_name varchar(255),
title varchar(255),
primary key (id)
)
JPA defines two terms for working with an embeddable type: @Embeddable and @Embedded .
So, the embeddable type is represented by the Publisher class and the parent entity makes use of it through an
@Embedded attribute.
The composed values are mapped to the same table as the parent table. Composition is part of good object-oriented data modeling
(idiomatic Java). In fact, that table could also be mapped by the following entity type instead.
JAVA
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue
private Long id;
@Column(name = "publisher_name")
private String publisherName;
@Column(name = "publisher_country")
private String publisherCountry;
The composition form is certainly more object-oriented, and that becomes more evident as we work with multiple embeddable
types.
This requirement is due to how object properties are mapped to database columns. By default, JPA expects a database column
to have the same name as its associated object property. When including multiple embeddables, the implicit name-based
mapping rule no longer works because multiple object properties could end up being mapped to the same database
column.
If an Embeddable type is used multiple times in some entity, you need to use the @AttributeOverride
(http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeOverride.html) and @AssociationOverride
(http://docs.oracle.com/javaee/7/api/javax/persistence/AssociationOverride.html) annotations to override the default column names defined
by the Embeddable.
Considering you have the following Publisher embeddable type which defines a @ManyToOne association with the Country
entity:
JAVA
@Embeddable
public static class Publisher {
//Getters and setters, equals and hashCode methods omitted for brevity
@Entity(name = "Country")
public static class Country {
@Id
@GeneratedValue
private Long id;
@NaturalId
private String name;
SQL
create table Country (
id bigint not null,
name varchar(255),
primary key (id)
)
Now, if you have a Book entity which declares two Publisher embeddable types for the ebook and paperback version, you
cannot use the default Publisher embeddable mapping since there will be a conflict between the two embeddable column
mappings.
Therefore, the Book entity needs to override the embeddable type mappings for each Publisher attribute:
JAVA
@Entity(name = "Book")
@AttributeOverrides({
@AttributeOverride(
name = "ebookPublisher.name",
column = @Column(name = "ebook_publisher_name")
),
@AttributeOverride(
name = "paperBackPublisher.name",
column = @Column(name = "paper_back_publisher_name")
)
})
@AssociationOverrides({
@AssociationOverride(
name = "ebookPublisher.country",
joinColumns = @JoinColumn(name = "ebook_publisher_country_id")
),
@AssociationOverride(
name = "paperBackPublisher.country",
joinColumns = @JoinColumn(name = "paper_back_publisher_country_id")
)
})
public static class Book {
@Id
@GeneratedValue
private Long id;
SQL
create table Book (
id bigint not null,
author varchar(255),
ebook_publisher_name varchar(255),
paper_back_publisher_name varchar(255),
title varchar(255),
ebook_publisher_country_id bigint,
paper_back_publisher_country_id bigint,
primary key (id)
)
This is a Hibernate-specific feature. Users concerned with JPA provider portability should instead prefer explicit
column naming with @AttributeOverride .
Hibernate naming strategies are covered in detail in Naming. However, for the purposes of this discussion, Hibernate has the
capability to interpret implicit column names in a way that is safe for use with multiple embeddable types.
JAVA
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue
private Long id;
@Embeddable
public static class Publisher {
//Getters and setters, equals and hashCode methods omitted for brevity
}
@Entity(name = "Country")
public static class Country {
@Id
@GeneratedValue
private Long id;
@NaturalId
private String name;
Example 129. Enabling implicit embeddable type mapping using the component path naming strategy
JAVA
metadataBuilder.applyImplicitNamingStrategy(
ImplicitNamingStrategyComponentPathImpl.INSTANCE
);
Now the "path" to each attribute is used in the implicit column naming:
SQL
create table Book (
id bigint not null,
author varchar(255),
ebookPublisher_name varchar(255),
paperBackPublisher_name varchar(255),
title varchar(255),
ebookPublisher_country_id bigint,
paperBackPublisher_country_id bigint,
primary key (id)
)
You could even develop your own naming strategy to implement other kinds of implicit naming rules.
Embeddable types that are used as collection entries, map keys or entity type identifiers cannot
include their own collection mappings.
Throughout this chapter and thereafter, entity types will be simply referred to as entity.
The entity class must be annotated with the javax.persistence.Entity annotation (or be denoted as such in XML mapping)
The entity class must have a public or protected no-argument constructor. It may define additional constructors as well.
The entity class must not be final. No methods or persistent instance variables of the entity class may be final.
If an entity instance is to be used remotely as a detached object, the entity class must implement the Serializable interface.
Both abstract and concrete classes can be entities. Entities may extend non-entity classes as well as entity classes, and
non-entity classes may extend entity classes.
The persistent state of an entity is represented by instance variables, which may correspond to JavaBean-style properties. An
instance variable must be directly accessed only from within the methods of the entity by the entity instance itself. The state of
the entity is available to clients only through the entity’s accessor methods (getter/setter methods) or other business methods.
Hibernate, however, is not as strict in its requirements. The differences from the list above include:
The entity class must have a no-argument constructor, which may be public, protected or package visibility. It may define
additional constructors as well.
Technically Hibernate can persist final classes or classes with final persistent state accessor (getter/setter) methods. However,
it is generally not a good idea as doing so will stop Hibernate from being able to generate proxies for lazy-loading the entity.
Hibernate does not restrict the application developer from exposing instance variables and referencing them from outside the
entity class itself. The validity of such a paradigm, however, is debatable at best.
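The point above about final classes can be illustrated with a minimal sketch of the check a proxy generator has to make before subclassing an entity (proxyable is our name for the check, not a Hibernate API):

```java
import java.lang.reflect.Modifier;

public class ProxyabilityCheck {

    static final class SealedEntity { }   // cannot be subclassed, so no proxy
    static class OpenEntity { }           // a lazy-loading proxy could extend this

    // Sketch of the precondition a subclass-based proxy generator must verify.
    static boolean proxyable(Class<?> entityClass) {
        return !Modifier.isFinal(entityClass.getModifiers());
    }

    public static void main(String[] args) {
        System.out.println(proxyable(SealedEntity.class)); // prints false
        System.out.println(proxyable(OpenEntity.class));   // prints true
    }
}
```

Since a runtime proxy is generated as a subclass of the entity, a final class (or final accessor methods, which cannot be overridden) rules that mechanism out.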
Starting in 5.0, Hibernate offers a more robust version of bytecode enhancement as another means of
handling lazy loading. Hibernate had some bytecode re-writing capabilities prior to 5.0, but they were
very rudimentary. See the BytecodeEnhancement section for additional information on fetching and
bytecode enhancement.
JPA requires that this constructor be defined as public or protected. Hibernate, for the most part, does not care about the
constructor visibility, as long as the system SecurityManager allows overriding the visibility setting. That said, the constructor
should be defined with at least package visibility if you wish to leverage runtime proxy generation.
The JPA specification requires this, otherwise, the model would prevent accessing the entity persistent state fields directly from
outside the entity itself.
Although Hibernate does not require it, it is recommended to follow the JavaBean conventions and define getters and setters for
entity persistent attributes. Nevertheless, you can still tell Hibernate to directly access the entity fields.
Attributes (whether fields or getters/setters) need not be declared public. Hibernate can deal with attributes declared with the
public, protected, package or private visibility. Again, if wanting to use runtime proxy generation for lazy loading, the getter/setter
should grant access to at least package visibility.
Historically this was considered optional. However, not defining identifier attribute(s) on the entity
should be considered a deprecated feature that will be removed in an upcoming release.
The identifier attribute does not necessarily need to be mapped to the column(s) that physically define the primary key. However,
it should map to column(s) that can uniquely identify each row.
We recommend that you declare consistently-named identifier attributes on persistent classes and
that you use a nullable (i.e., non-primitive) type.
The placement of the @Id annotation marks the persistence state access strategy.
JAVA
@Id
private Long id;
Hibernate offers multiple identifier generation strategies, see the Identifier Generators chapter for more about this topic.
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
An entity models a database table. The identifier uniquely identifies each row in that table. By default, the name of the table is
assumed to be the same as the name of the entity. To explicitly give the name of the table or to specify other information about the
table, we would use the javax.persistence.Table annotation.
JAVA
@Entity(name = "Book")
@Table(
catalog = "public",
schema = "store",
name = "book"
)
public static class Book {
@Id
private Long id;
Much of the discussion in this section deals with the relation of an entity to a Hibernate Session,
whether the entity is managed, transient or detached. If you are unfamiliar with these topics, they are
explained in the Persistence Context chapter.
Whether to implement equals() and hashCode() methods in your domain model, let alone how to implement them, is a
surprisingly tricky discussion when it comes to ORM.
There is really just one absolute case: a class that acts as an identifier must implement equals/hashCode based on the id value(s).
Generally, this is pertinent for user-defined classes used as composite identifiers. Beyond this one very specific use case and a few
others we will discuss below, you may want to consider not implementing equals/hashCode altogether.
So what’s all the fuss? Normally, most Java objects provide a built-in equals() and hashCode() based on the object’s identity, so
each new object will be different from all others. This is generally what you want in ordinary Java programming. Conceptually,
however, this starts to break down once you consider the possibility of multiple instances of a class representing the
same data.
This is, in fact, exactly the case when dealing with data coming from a database. Every time we load a specific Person from the
database we would naturally get a unique instance. Hibernate, however, works hard to make sure that does not happen within a
given Session . In fact, Hibernate guarantees equivalence of persistent identity (database row) and Java identity inside a
particular session scope. So if we ask a Hibernate Session to load that specific Person multiple times we will actually get back
the same instance:
JAVA
Book book1 = entityManager.find( Book.class, 1L );
Book book2 = entityManager.find( Book.class, 1L );
assertTrue( book1 == book2 );
Consider we have a Library parent entity which contains a java.util.Set of Book entities:
JAVA
@Entity(name = "Library")
public static class Library {
@Id
private Long id;
JAVA
Library library = entityManager.find( Library.class, 1L );
library.getBooks().add( book1 );
library.getBooks().add( book2 );
assertEquals( 1, library.getBooks().size() );
However, the semantics change when we mix instances loaded from different Sessions:
JAVA
Book book1 = doInJPA( this::entityManagerFactory, entityManager -> {
return entityManager.find( Book.class, 1L );
} );
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Set<Book> books = new HashSet <>();
books.add( book1 );
books.add( book2 );
assertEquals( 2, books.size() );
} );
Specifically, the outcome in this last example will depend on whether the Book class implemented equals/hashCode, and, if so,
how.
If the Book class did not override the default equals/hashCode, then the two Book object references are not going to be equal
since their references are different.
JAVA
Library library = entityManager.find( Library.class, 1L );
library.getBooks().add( book1 );
library.getBooks().add( book2 );
assertEquals( 2, library.getBooks().size() );
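The same behavior can be reproduced with nothing but java.util: under the default identity-based equals/hashCode, two instances carrying identical state remain distinct, so a HashSet keeps both. The Book class below is a plain stand-in for the mapped entity, not the guide's code:

```java
import java.util.HashSet;
import java.util.Set;

public class IdentityEqualsDemo {

    // No equals/hashCode override: java.lang.Object identity semantics apply.
    static class Book {
        Long id;
        Book(Long id) { this.id = id; }
    }

    public static void main(String[] args) {
        Set<Book> books = new HashSet<>();
        books.add(new Book(1L));
        books.add(new Book(1L)); // same state, different instance

        System.out.println(books.size()); // prints 2
    }
}
```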
In cases where you will be dealing with entities outside of a Session (whether they be transient or detached), especially in cases
where you will be using them in Java collections, you should consider implementing equals/hashCode.
A common initial approach is to use the entity’s identifier attribute as the basis for equals/hashCode calculations:
JAVA
@Entity(name = "Library")
public static class Library {
@Id
private Long id;
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue
private Long id;
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Book book = (Book) o;
return Objects.equals( id, book.id );
}
@Override
public int hashCode() {
return Objects.hash( id );
}
}
It turns out that this still breaks when adding a transient instance of Book to a Set, as we saw in the last example:
JAVA
_library.getBooks().add( book1 );
_library.getBooks().add( book2 );
return _library;
} );
The issue here is a conflict between the use of the generated identifier, the contract of Set, and the equals/hashCode
implementations. Set says that the equals/hashCode value for an object should not change while the object is part of the Set.
But that is exactly what happened here because the equals/hashCode are based on the (generated) id, which was not set until the
JPA transaction was committed.
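The violation is easy to reproduce in plain Java, outside Hibernate entirely. The sketch below is illustrative: the direct field mutation stands in for what persist/flush does to a generated identifier.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class Book {

    Long id; // null while transient; "generated" later by the provider

    @Override
    public boolean equals(Object o) {
        return o instanceof Book && Objects.equals( id, ( (Book) o ).id );
    }

    @Override
    public int hashCode() {
        // hash depends on the mutable id: this is the root of the problem
        return Objects.hash( id );
    }
}
```

Once the id changes, the entry sits in the wrong hash bucket, so the Set can no longer find the very object it contains.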
Note that this is just a concern when using generated identifiers. If you are using assigned identifiers this will not be a problem,
assuming the identifier value is assigned prior to adding to the Set .
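Conversely, with an assigned identifier the id-based equals/hashCode is safe, because the value never changes after construction. A plain-Java sketch (annotations omitted, names illustrative):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class Book {

    private final Long id; // assigned by the application, never mutated

    Book(Long id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( !( o instanceof Book ) ) {
            return false;
        }
        return Objects.equals( id, ( (Book) o ).id );
    }

    @Override
    public int hashCode() {
        return Objects.hash( id );
    }
}
```

Because the id is fixed before the instance is ever added to a Set, the Set contract is never violated.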
Another option is to force the identifier to be generated and set prior to adding to the Set :
JAVA
Book book1 = new Book();
book1.setTitle( "High-Performance Java Persistence" );
entityManager.persist( book1 );
entityManager.persist( book2 );
entityManager.flush();
_library.getBooks().add( book1 );
_library.getBooks().add( book2 );
return _library;
} );
The final approach is to use a "better" equals/hashCode implementation, making use of a natural-id or business-key.
JAVA
@Entity(name = "Library")
public static class Library {
@Id
private Long id;
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue
private Long id;
@NaturalId
private String isbn;
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Book book = (Book) o;
return Objects.equals( isbn, book.isbn );
}
@Override
public int hashCode() {
return Objects.hash( isbn );
}
}
This time, when adding a Book to the Library Set, you can retrieve the Book even after it has been persisted:
JAVA
_library.getBooks().add( book1 );
return _library;
} );
As you can see, the question of equals/hashCode is not trivial, nor is there a one-size-fits-all solution.
Although using a natural-id is best for equals and hashCode, sometimes you only have the entity
identifier that provides a unique constraint.
It's possible to use the entity identifier for equality checks, but it needs a workaround:
you need to provide a constant value for hashCode so that the hash code value does not change before and
after the entity is flushed;
you need to compare the entity identifier equality only for non-transient entities.
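Putting those two rules together, a minimal sketch of such an equals/hashCode pair might look as follows. The entity annotations and the rest of the Client mapping are omitted here; the class shape is illustrative, not a verbatim listing:

```java
class Client {

    private Long id; // the @Id attribute, possibly generated at flush time

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( !( o instanceof Client ) ) {
            return false;
        }
        Client other = (Client) o;
        // only non-transient entities (non-null id) can be equal
        return id != null && id.equals( other.getId() );
    }

    @Override
    public int hashCode() {
        // constant per class: the hash does not change when the
        // identifier is assigned during flush
        return getClass().hashCode();
    }
}
```

With a constant hashCode, instances stay in the same hash bucket before and after the identifier is assigned, at the cost of degraded HashSet performance for very large collections.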
JAVA
@Entity(name = "Client")
@Table(name = "client")
public static class Client {
@Id
private Long id;
@Column(name = "first_name")
private String firstName;
@Column(name = "last_name")
private String lastName;
@Entity(name = "Account")
@Table(name = "account")
public static class Account {
@Id
private Long id;
@ManyToOne
private Client client;
@Entity(name = "AccountTransaction")
@Table(name = "account_transaction")
public static class AccountTransaction {
@Id
@GeneratedValue
private Long id;
@ManyToOne
private Account account;
@Entity(name = "AccountSummary")
@Subselect(
"select " +
" a.id as id, " +
" concat(concat(c.first_name, ' '), c.last_name) as clientName, " +
" sum(at.cents) as balance " +
"from account a " +
"join client c on c.id = a.client_id " +
"join account_transaction at on a.id = at.account_id " +
"group by a.id, concat(concat(c.first_name, ' '), c.last_name)"
)
@Synchronize( {"client", "account", "account_transaction"} )
public static class AccountSummary {
@Id
private Long id;
In the example above, the Account entity does not retain any balance since every account operation is registered as an
AccountTransaction . To find the Account balance, we need to query the AccountSummary which shares the same identifier
with the Account entity.
However, the AccountSummary is not mapped to a physical table, but to an SQL query.
So, if we have the following AccountTransaction record, the AccountSummary balance will match the proper amount of money
in this Account .
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Client client = new Client ();
client.setId( 1L );
client.setFirstName( "John" );
client.setLastName( "Doe" );
entityManager.persist( client );
If we add a new AccountTransaction entity and refresh the AccountSummary entity, the balance is updated accordingly:
JAVA
entityManager.refresh( summary );
assertEquals( 100 * 4800, summary.getBalance() );
} );
The goal of the @Synchronize annotation in the AccountSummary entity mapping is to instruct
Hibernate which database tables are needed by the underlying @Subselect SQL query. This is
because, unlike JPQL and HQL queries, Hibernate cannot parse the underlying native SQL query.
With the @Synchronize annotation in place, when executing an HQL or JPQL which selects from the
AccountSummary entity, Hibernate will trigger a Persistence Context flush if there are pending Account , Client
or AccountTransaction entity state transitions.
However, if the entity class is final, Javassist will not create a proxy and you will get a Pojo even when you only need a proxy
reference. In this case, you could proxy an interface that this particular entity implements, as illustrated by the following
example.
JAVA
public interface Identifiable {
Long getId();
void setId(Long id);
}
@Entity(name = "Book")
public static final class Book implements Identifiable {
@Id
private Long id;
@Override
public Long getId() {
return id;
}
@Override
public void setId(Long id) {
this.id = id;
}
}
When loading the Book entity proxy, Hibernate is going to proxy the Identifiable interface instead as illustrated by the
following example:
Example 146. Proxying the final entity class implementing the Identifiable interface
JAVA
doInHibernate( this::sessionFactory, session -> {
Book book = new Book();
book.setId( 1L );
book.setTitle( "High-Performance Java Persistence" );
book.setAuthor( "Vlad Mihalcea" );
session.persist( book );
} );
assertTrue(
"Loaded entity is not an instance of the proxy interface",
book instanceof Identifiable
);
assertFalse(
"Proxy class was not created",
book instanceof Book
);
} );
SQL
insert
into
Book
(author, title, id)
values
(?, ?, ?)
As you can see in the associated SQL snippet, Hibernate issues no SQL SELECT query since the proxy can be constructed without
needing to fetch the actual entity Pojo.
In the following entity mapping, both the embeddable and the entity are mapped as interfaces, not Pojos.
JAVA
@Entity
@Tuplizer(impl = DynamicEntityTuplizer.class)
public interface Cuisine {
@Id
@GeneratedValue
Long getId();
void setId(Long id);
String getName();
void setName(String name);
JAVA
@Embeddable
public interface Country {
@Column(name = "CountryName")
String getName();
The @Tuplizer instructs Hibernate to use the DynamicEntityTuplizer and DynamicEmbeddableTuplizer to handle the
associated entity and embeddable object types.
Both the Cuisine entity and the Country embeddable types are going to be instantiated as Java dynamic proxies, as you can see
in the following DynamicInstantiator example:
JAVA
public class DynamicEntityTuplizer extends PojoEntityTuplizer {
public DynamicEntityTuplizer (
EntityMetamodel entityMetamodel,
PersistentClass mappedEntity) {
super( entityMetamodel, mappedEntity );
}
@Override
protected Instantiator buildInstantiator(
EntityMetamodel entityMetamodel,
PersistentClass persistentClass) {
return new DynamicInstantiator(
persistentClass.getClassName()
);
}
@Override
protected ProxyFactory buildProxyFactory(
PersistentClass persistentClass,
Getter idGetter,
Setter idSetter) {
return super.buildProxyFactory(
persistentClass, idGetter,
idSetter
);
}
}
JAVA
public class DynamicEmbeddableTuplizer
extends PojoComponentTuplizer {
JAVA
public class ProxyHelper {
JAVA
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
String methodName = method.getName();
if ( methodName.startsWith( "set" ) ) {
String propertyName = methodName.substring( 3 );
data.put( propertyName, args[0] );
}
else if ( methodName.startsWith( "get" ) ) {
String propertyName = methodName.substring( 3 );
return data.get( propertyName );
}
else if ( "toString".equals( methodName ) ) {
return entityName + "#" + data.get( "Id" );
}
else if ( "hashCode".equals( methodName ) ) {
return this.hashCode();
}
return null;
}
With the DynamicInstantiator in place, we can work with the dynamic proxy entities just like with Pojo entities.
JAVA
cuisine.setCountry( country );
session.persist( cuisine );
return cuisine;
} );
doInHibernateSessionBuilder(
() -> sessionFactory()
.withOptions()
.interceptor( new EntityNameInterceptor () ),
session -> {
Cuisine cuisine = session.get( Cuisine.class, _cuisine.getId() );
JAVA
@Entity
@Persister( impl = EntityPersister.class )
public class Author {
@Id
public Integer id;
JAVA
@Entity
@Persister( impl = EntityPersister.class )
public class Book {
@Id
public Integer id;
By providing your own EntityPersister and CollectionPersister implementations, you can control how entities and
collections are persisted into the database.
Embeddable types inherit the access strategy from their parent entities.
Field-based access
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
When using field-based access, adding other entity-level methods is much more flexible because Hibernate won't consider those
methods part of the persistent state. To exclude a field from the entity's persistent state, the field must be marked with the
@Transient annotation.
Another advantage of using field-based access is that some entity attributes can be hidden from
outside the entity. An example of such an attribute is the entity @Version field, which usually does not
need to be manipulated by the data access layer. With field-based access, we can simply omit the
getter and the setter for this version field, and Hibernate can still leverage the optimistic concurrency control
mechanism.
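For example, a mapping along these lines (an illustrative sketch, not a listing from this guide) hides the version attribute entirely while Hibernate still manages it through field access:

```java
@Entity(name = "Book")
public static class Book {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    // no getter or setter: Hibernate reads and writes the field directly,
    // so the version stays invisible to callers of this entity
    @Version
    private int version;

    public Long getId() {
        return id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }
}
```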
Property-based access
JAVA
@Entity(name = "Book")
public static class Book {
@Id
public Long getId() {
return id;
}
When using property-based access, Hibernate uses the accessors for both reading and writing the entity state. Every other
method that will be added to the entity (e.g. helper methods for synchronizing both ends of a bidirectional one-to-many
association) will have to be marked with the @Transient annotation.
JAVA
@Entity(name = "Book")
public static class Book {
@Id
public Long getId() {
return id;
}
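For instance, with property-based access, any extra accessor would otherwise be treated as a persistent attribute. The following sketch (illustrative names, not a listing from this guide) shows how such a helper is excluded:

```java
@Entity(name = "Book")
public static class Book {

    private Long id;
    private String title;

    @Id
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    // without @Transient, Hibernate would consider this getter
    // part of the persistent state and expect a matching column
    @Transient
    public String getFormattedTitle() {
        return "'" + title + "'";
    }
}
```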
The embeddable types can override the default implicit access strategy (inherited from the owning entity). In the following
example, the embeddable uses property-based access, regardless of the access strategy chosen by the owning entity:
JAVA
@Embeddable
@Access( AccessType.PROPERTY )
public static class Author {
public Author () {
}
The owning entity can use field-based access while the embeddable uses property-based access as it has chosen explicitly:
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@Embedded
private Author author;
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@ElementCollection
@CollectionTable(
name = "book_author",
joinColumns = @JoinColumn(name = "book_id")
)
private List<Author> authors = new ArrayList<>();
2.6. Identifiers
Identifiers model the primary key of an entity. They are used to uniquely identify each specific entity.
Hibernate and JPA both make the following assumptions about the corresponding database column(s):
UNIQUE
The values must uniquely identify each row.
NOT NULL
The values cannot be null. For composite ids, no part can be null.
IMMUTABLE
The values, once inserted, can never be changed. This is more of a general guideline than a hard and fast rule, as opinions vary. JPA
defines the behavior of changing the value of the identifier attribute to be undefined; Hibernate simply does not support that.
In cases where the values for the PK you have chosen will be updated, Hibernate recommends mapping the mutable value as a
natural id, and using a surrogate id for the PK. See Natural Ids.
Technically the identifier does not have to map to the column(s) physically defined as the table primary
key. They just need to map to column(s) that uniquely identify each row. However, this documentation
will continue to use the terms identifier and primary key interchangeably.
Every entity must define an identifier. For entity inheritance hierarchies, the identifier must be defined just on the entity that is
the root of the hierarchy.
According to JPA, only the following types should be used as identifier attribute types:
any Java primitive type
any primitive wrapper type
java.lang.String
java.util.Date (TemporalType#DATE)
java.sql.Date
java.math.BigDecimal
java.math.BigInteger
Any types used for identifier attributes beyond this list will not be portable.
Assigned identifiers
Values for simple identifiers can be assigned, as we have seen in the examples above. The expectation for assigned identifier
values is that the application assigns them (sets them on the entity attribute) prior to calling save/persist.
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
Generated identifiers
Values for simple identifiers can be generated. To denote that an identifier attribute is generated, it is annotated with
javax.persistence.GeneratedValue
JAVA
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue
private Long id;
In addition to the type restriction list above, JPA says that if generated identifier values are used (see below), only integer types
(short, int, long) will be portably supported.
The expectation for generated identifier values is that Hibernate will generate the value when the save/persist occurs.
Identifier value generation strategies are discussed in detail in the Generated identifier values section.
The composite identifier must be represented by a "primary key class". The primary key class may be defined using the
javax.persistence.EmbeddedId annotation (see Composite identifiers with @EmbeddedId ), or defined using the
javax.persistence.IdClass annotation (see Composite identifiers with @IdClass ).
The primary key class must be public and must have a public no-arg constructor.
The primary key class must define equals and hashCode methods, consistent with equality for the underlying database types
to which the primary key is mapped.
The restriction that a composite identifier has to be represented by a "primary key class" is only JPA
specific. Hibernate does allow composite identifiers to be defined without a "primary key class",
although that modeling technique is deprecated and therefore omitted from this discussion.
The attributes making up the composition can be basic, composite, or ManyToOne types. Note especially that collections and one-to-
ones are never appropriate.
JAVA
@Entity(name = "SystemUser")
public static class SystemUser {
@EmbeddedId
private PK pk;
@Embeddable
public static class PK implements Serializable {
private PK() {
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
PK pk = (PK) o;
return Objects.equals( subsystem, pk.subsystem ) &&
Objects.equals( username, pk.username );
}
@Override
public int hashCode() {
return Objects.hash( subsystem, username );
}
}
JAVA
@Entity(name = "SystemUser")
public static class SystemUser {
@EmbeddedId
private PK pk;
@Entity(name = "Subsystem")
public static class Subsystem {
@Id
private String id;
@Embeddable
public static class PK implements Serializable {
private PK() {
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
PK pk = (PK) o;
return Objects.equals( subsystem, pk.subsystem ) &&
Objects.equals( username, pk.username );
}
@Override
public int hashCode() {
return Objects.hash( subsystem, username );
}
}
Hibernate supports directly modeling the ManyToOne in the PK class, whether @EmbeddedId or
@IdClass .
However, that is not portably supported by the JPA specification. In JPA terms one would use "derived
identifiers"; for details, see Derived Identifiers.
JAVA
@Entity(name = "SystemUser")
@IdClass( PK.class )
public static class SystemUser {
@Id
private String subsystem;
@Id
private String username;
public PK getId() {
return new PK(
subsystem,
username
);
}
private PK() {
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
PK pk = (PK) o;
return Objects.equals( subsystem, pk.subsystem ) &&
Objects.equals( username, pk.username );
}
@Override
public int hashCode() {
return Objects.hash( subsystem, username );
}
}
Non-aggregated composite identifiers can also contain ManyToOne attributes as we saw with aggregated ones (still non-portably).
JAVA
@Entity(name = "SystemUser")
@IdClass( PK.class )
public static class SystemUser {
@Id
@ManyToOne(fetch = FetchType.LAZY)
private Subsystem subsystem;
@Id
private String username;
@Entity(name = "Subsystem")
public static class Subsystem {
@Id
private String id;
private PK() {
}
With non-aggregated composite identifiers, Hibernate also supports "partial" generation of the composite values.
JAVA
@Entity(name = "SystemUser")
@IdClass( PK.class )
public static class SystemUser {
@Id
private String subsystem;
@Id
private String username;
@Id
@GeneratedValue
private Integer registrationId;
public PK getId() {
return new PK(
subsystem,
username,
registrationId
);
}
private PK() {
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
PK pk = (PK) o;
return Objects.equals( subsystem, pk.subsystem ) &&
Objects.equals( username, pk.username ) &&
Objects.equals( registrationId, pk.registrationId );
}
@Override
public int hashCode() {
return Objects.hash( subsystem, username, registrationId );
}
}
This feature exists because of a highly questionable interpretation of the JPA specification made by the
SpecJ committee.
Hibernate does not feel that JPA defines support for this, but added the feature simply to be usable in
SpecJ benchmarks. Use of this feature may or may not be portable from a JPA perspective.
JAVA
@Entity(name = "Book")
public static class Book implements Serializable {
@Id
@ManyToOne(fetch = FetchType.LAZY)
private Author author;
@Id
@ManyToOne(fetch = FetchType.LAZY)
private Publisher publisher;
@Id
private String title;
private Book() {
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Book book = (Book) o;
return Objects.equals( author, book.author ) &&
Objects.equals( publisher, book.publisher ) &&
Objects.equals( title, book.title );
}
@Override
public int hashCode() {
return Objects.hash( author, publisher, title );
}
}
@Entity(name = "Author")
public static class Author implements Serializable {
@Id
private String name;
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Author author = (Author) o;
return Objects.equals( name, author.name );
}
@Override
public int hashCode() {
return Objects.hash( name );
}
}
@Entity(name = "Publisher")
public static class Publisher implements Serializable {
@Id
private String name;
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Publisher publisher = (Publisher) o;
return Objects.equals( name, publisher.name );
}
@Override
public int hashCode() {
return Objects.hash( name );
}
}
Although the mapping is much simpler than using an @EmbeddedId or an @IdClass , there’s no separation between the entity
instance and the actual identifier. To query this entity, an instance of the entity itself must be supplied to the persistence context.
JAVA
Book book = entityManager.find( Book.class, new Book(
author,
publisher,
"High-Performance Java Persistence"
) );
For discussion of generated values for non-identifier attributes, see Generated properties.
Hibernate supports identifier value generation across a number of different types. Remember that JPA portably defines identifier
value generation just for integer types.
Identifier value generation is indicated using the javax.persistence.GeneratedValue annotation. The most important piece of
information here is the specified javax.persistence.GenerationType which indicates how values will be generated.
The discussions below assume that the application is using Hibernate’s "new generator mappings" as
indicated by the hibernate.id.new_generator_mappings setting or
MetadataBuilder.enableNewIdentifierGeneratorSupport method during bootstrap. Starting with
Hibernate 5, this is set to true by default. If applications set this to false the resolutions discussed here will be
very different. The rest of the discussion here assumes this setting is enabled (true).
AUTO
Indicates that the persistence provider (Hibernate) should choose an appropriate generation strategy. See Interpreting AUTO.
IDENTITY
Indicates that database IDENTITY columns will be used for primary key value generation. See Using IDENTITY columns.
SEQUENCE
Indicates that database sequence should be used for obtaining primary key values. See Using sequences.
TABLE
Indicates that a database table should be used for obtaining primary key values. See Using the table identifier generator.
The default behavior is to look at the Java type of the identifier attribute.
If the identifier type is numerical (e.g. Long, Integer), then Hibernate is going to use the IdGeneratorStrategyInterpreter
to resolve the identifier generator strategy. The IdGeneratorStrategyInterpreter has two implementations:
FallbackInterpreter
This is the default strategy since Hibernate 5.0. For older versions, this strategy is enabled through the
hibernate.id.new_generator_mappings configuration property. When using this strategy, AUTO always resolves to
SequenceStyleGenerator . If the underlying database supports sequences, then a SEQUENCE generator is used. Otherwise, a
TABLE generator is going to be used instead.
LegacyFallbackInterpreter
This is a legacy mechanism that was used by Hibernate prior to version 5.0 or when the
hibernate.id.new_generator_mappings configuration property is false. The legacy strategy maps AUTO to the native
generator strategy which uses the Dialect#getNativeIdentifierGeneratorStrategy
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/dialect/Dialect.html#getNativeIdentifierGeneratorStrategy--) to resolve the
actual identifier generator (e.g. identity or sequence ).
The preferred (and portable) way to configure this generator is using the JPA-defined javax.persistence.SequenceGenerator
annotation.
The simplest form is to simply request sequence generation; Hibernate will use a single, implicitly-named sequence
( hibernate_sequence ) for all such unnamed definitions.
JAVA
@Entity(name = "Product")
public static class Product {
@Id
@GeneratedValue(
strategy = GenerationType.SEQUENCE
)
private Long id;
@Column(name = "product_name")
private String name;
JAVA
@Entity(name = "Product")
public static class Product {
@Id
@GeneratedValue(
strategy = GenerationType.SEQUENCE,
generator = "sequence-generator"
)
@SequenceGenerator(
name = "sequence-generator",
sequenceName = "product_sequence"
)
private Long id;
@Column(name = "product_name")
private String name;
JAVA
@Entity(name = "Product")
public static class Product {
@Id
@GeneratedValue(
strategy = GenerationType.SEQUENCE,
generator = "sequence-generator"
)
@SequenceGenerator(
name = "sequence-generator",
sequenceName = "product_sequence",
allocationSize = 5
)
private Long id;
@Column(name = "product_name")
private String name;
If Hibernate believes the JDBC environment supports java.sql.Statement#getGeneratedKeys , then that approach will be
used for extracting the IDENTITY generated keys.
Otherwise, if Dialect#supportsInsertSelectIdentity reports true, Hibernate will use the Dialect specific INSERT+SELECT
statement syntax.
Otherwise, Hibernate will expect that the database supports some form of asking for the most recently inserted IDENTITY
value via a separate SQL command as indicated by Dialect#getIdentitySelectString .
It is important to realize that this imposes a runtime behavior where the entity row must be physically
inserted prior to the identifier value being known. This can mess up extended persistence contexts
(conversations). Because of the runtime imposition/inconsistency, Hibernate suggests other forms of
identifier value generation be used.
There is yet another important runtime impact of choosing IDENTITY generation: Hibernate will not be
able to use JDBC batching for inserts of the entities that use IDENTITY generation. The importance of this
depends on the application-specific use cases. If the application is not usually creating many new
instances of a given type of entity that uses IDENTITY generation, then this is not an important impact since
batching would not have been helpful anyway.
The basic idea is that a given table-generator table ( hibernate_sequences for example) can hold multiple segments of identifier
generation values.
JAVA
@Entity(name = "Product")
public static class Product {
@Id
@GeneratedValue(
strategy = GenerationType.TABLE
)
private Long id;
@Column(name = "product_name")
private String name;
SQL
create table hibernate_sequences (
sequence_name varchar2(255 char) not null,
next_val number(19,0),
primary key (sequence_name)
)
However, you can configure the table identifier generator using the @TableGenerator
(http://docs.oracle.com/javaee/7/api/javax/persistence/TableGenerator.html) annotation.
JAVA
@Entity(name = "Product")
public static class Product {
@Id
@GeneratedValue(
strategy = GenerationType.TABLE,
generator = "table-generator"
)
@TableGenerator(
name = "table-generator",
table = "table_identifier",
pkColumnName = "table_name",
valueColumnName = "product_id",
allocationSize = 5
)
private Long id;
@Column(name = "product_name")
private String name;
SQL
create table table_identifier (
table_name varchar2(255 char) not null,
product_id number(19,0),
primary key (table_name)
)
Now, when inserting 3 Product entities, Hibernate generates the following statements:
JAVA
for ( long i = 1; i <= 3; i++ ) {
Product product = new Product ();
product.setName( String.format( "Product %d", i ) );
entityManager.persist( product );
}
SQL
select
tbl.product_id
from
table_identifier tbl
where
tbl.table_name = ?
for update
insert
into
table_identifier
(table_name, product_id)
values
(?, ?)
update
table_identifier
set
product_id= ?
where
product_id= ?
and table_name= ?
select
tbl.product_id
from
table_identifier tbl
where
tbl.table_name= ? for update
update
table_identifier
set
product_id= ?
where
product_id= ?
and table_name= ?
insert
into
Product
(product_name, id)
values
(?, ?)
insert
into
Product
(product_name, id)
values
(?, ?)
insert
into
Product
(product_name, id)
values
(?, ?)
UUIDGenerator supports pluggable strategies for exactly how the UUID is generated. These strategies are defined by the
org.hibernate.id.UUIDGenerationStrategy contract. The default strategy is a version 4 (random) strategy according to IETF
RFC 4122. Hibernate does ship with an alternative strategy which is an RFC 4122 version 1 (time-based) strategy (using the IP address
rather than the MAC address).
JAVA
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue
private UUID id;
To specify an alternative generation strategy, we’d have to define some configuration via @GenericGenerator . Here we choose
the RFC 4122 version 1 compliant strategy named org.hibernate.id.uuid.CustomVersionOneStrategy .
JAVA
@Entity(name = "Book")
public static class Book {
@Id
@GeneratedValue( generator = "custom-uuid" )
@GenericGenerator(
name = "custom-uuid",
strategy = "org.hibernate.id.UUIDGenerator",
parameters = {
@Parameter(
name = "uuid_gen_strategy_class",
value = "org.hibernate.id.uuid.CustomVersionOneStrategy"
)
}
)
private UUID id;
2.6.12. Optimizers
Most of the Hibernate generators that separately obtain identifier values from database structures support the use of pluggable
optimizers. Optimizers help manage the number of times Hibernate has to talk to the database in order to generate identifier
values. For example, with no optimizer applied to a sequence-generator, every time the application asked Hibernate to generate
an identifier it would need to grab the next sequence value from the database. But if we can minimize the number of times we
need to communicate with the database here, the application will be able to perform better, which is, in fact, the role of these
optimizers.
none
No optimization is performed. We communicate with the database each and every time an identifier value is needed from the
generator.
pooled-lo
The pooled-lo optimizer works on the principle that the increment-value is encoded into the database table/sequence structure.
In sequence-terms, this means that the sequence is defined with a greater-than-1 increment size.
For example, consider a brand new sequence defined as create sequence m_sequence start with 1 increment by 20 . This
sequence essentially defines a "pool" of 20 usable id values each and every time we ask it for its next-value. The pooled-lo
optimizer interprets the next-value as the low end of that pool.
So when we first ask it for next-value, we’d get 1. We then assume that the valid pool would be the values from 1-20 inclusive.
The next call to the sequence would result in 21, which would define 21-40 as the valid range. And so on. The "lo" part of the
name indicates that the value from the database table/sequence is interpreted as the pool lo(w) end.
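The bookkeeping described above can be sketched in plain Java. This is a simplified model of the pooled-lo arithmetic, not Hibernate's actual optimizer implementation; the in-memory counter stands in for the database sequence:

```java
class PooledLoSketch {

    private final int incrementSize;
    private long sequenceValue;     // stands in for the database sequence
    private long poolLo = -1;       // low end of the current pool
    private long next = -1;         // next identifier to hand out

    PooledLoSketch(int startWith, int incrementSize) {
        this.sequenceValue = startWith;
        this.incrementSize = incrementSize;
    }

    // one "database round trip": the sequence returns its current value
    // and advances by incrementSize
    private long nextSequenceValue() {
        long value = sequenceValue;
        sequenceValue += incrementSize;
        return value;
    }

    long generate() {
        if ( next < 0 || next >= poolLo + incrementSize ) {
            // pool exhausted: fetch a new low end from the sequence
            poolLo = nextSequenceValue();
            next = poolLo;
        }
        return next++;
    }
}
```

For create sequence m_sequence start with 1 increment by 20, the first sequence call yields 1 and covers identifiers 1-20; the second call yields 21 and covers 21-40, so only one round trip is needed per 20 identifiers.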
pooled
Just like pooled-lo, except that here the value from the table/sequence is interpreted as the high end of the value pool.
hilo; legacy-hilo
Define a custom algorithm for generating pools of values based on a single value from a table or sequence.
These optimizers are not recommended for use. They are maintained (and mentioned) here simply for use by legacy
applications that used these strategies previously.
Applications can also implement and use their own optimizer strategies, as defined by the
org.hibernate.id.enhanced.Optimizer contract.
To make use of the pooled or pooled-lo optimizers, the entity mapping must use the @GenericGenerator
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/GenericGenerator.html) annotation:
JAVA
@Entity(name = "Product")
public static class Product {
@Id
@GeneratedValue(
strategy = GenerationType .SEQUENCE,
generator = "product_generator"
)
@GenericGenerator(
name = "product_generator",
strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
parameters = {
@Parameter(name = "sequence_name", value = "product_sequence"),
@Parameter(name = "initial_value", value = "1"),
@Parameter(name = "increment_size", value = "3"),
@Parameter(name = "optimizer", value = "pooled-lo")
}
)
private Long id;
@Column(name = "p_name")
private String name;
@Column(name = "p_number")
private String number;
Now, when saving 5 Product entities and flushing the Persistence Context after every 3 entities:
SQL
CALL NEXT VALUE FOR product_sequence
As you can see from the list of generated SQL statements, you can insert 3 entities with just one database sequence call. This way,
the pooled and the pooled-lo optimizers allow you to reduce the number of database roundtrips, therefore reducing the overall
transaction response time.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @NaturalId
    private String registrationNumber;

    public Person() {}
}

@Entity(name = "PersonDetails")
public static class PersonDetails {

    @Id
    private Long id;

    @OneToOne
    @MapsId
    private Person person;
}
In the example above, the PersonDetails entity uses the id column for both the entity identifier and for the one-to-one
association to the Person entity. The value of the PersonDetails entity identifier is "derived" from the identifier of its parent
Person entity.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
    Person person = new Person( "ABC-123" );
    person.setId( 1L );
    entityManager.persist( person );

    PersonDetails personDetails = new PersonDetails();
    personDetails.setPerson( person );
    entityManager.persist( personDetails );
} );
The @MapsId annotation can also reference columns from an @EmbeddedId identifier.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @NaturalId
    private String registrationNumber;

    public Person() {}
}

@Entity(name = "PersonDetails")
public static class PersonDetails {

    @Id
    private Long id;

    @OneToOne
    @PrimaryKeyJoinColumn
    private Person person;
}
Unlike @MapsId, the application developer is responsible for ensuring that the identifier and the many-to-one (or one-to-one) association are kept in sync, as you can see in the PersonDetails#setPerson method.
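A minimal sketch of such a synchronizing setter, in plain Java with the classes pared down to the fields involved (illustrative only, not the full entity code):

```java
// With @PrimaryKeyJoinColumn, keeping the identifier and the association
// in sync is the application's job, typically done in the setter itself.
public class PrimaryKeyJoinColumnSketch {

    static class Person {
        Long id;
        Person(Long id) { this.id = id; }
    }

    static class PersonDetails {
        Long id;
        Person person;

        void setPerson(Person person) {
            this.person = person;
            this.id = person.id; // manual sync of the shared primary key
        }
    }

    // Helper that demonstrates the derived identifier value.
    public static Long detailsIdAfterSet(Long personId) {
        PersonDetails details = new PersonDetails();
        details.setPerson(new Person(personId));
        return details.id;
    }
}
```

With @MapsId, Hibernate performs this assignment automatically; here it must be written out by hand.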
2.6.15. @RowId
If you annotate a given entity with the @RowId annotation and the underlying database supports fetching a record by ROWID
(e.g. Oracle), then Hibernate can use the ROWID pseudo-column for CRUD operations.
JAVA
@Entity(name = "Product")
@RowId("ROWID")
public static class Product {

    @Id
    private Long id;

    @Column(name = "`name`")
    private String name;

    @Column(name = "`number`")
    private String number;
}
Now, when fetching an entity and modifying it, Hibernate uses the ROWID pseudo-column for the UPDATE SQL statement.
JAVA
Product product = entityManager.find( Product.class, 1L );
SQL
SELECT
p.id as id1_0_0_,
p."name" as name2_0_0_,
p."number" as number3_0_0_,
p.ROWID as rowid_0_
FROM
Product p
WHERE
p.id = ?
UPDATE
Product
SET
"name" = ?,
"number" = ?
WHERE
ROWID = ?
2.7. Associations
Associations describe how two or more entities form a relationship based on database joining semantics.
2.7.1. @ManyToOne
@ManyToOne is the most common association, having a direct equivalent in the relational database as well (e.g. foreign key), and
so it establishes a relationship between a child entity and a parent.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    @GeneratedValue
    private Long id;
}

@Entity(name = "Phone")
public static class Phone {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`number`")
    private String number;

    @ManyToOne
    @JoinColumn(name = "person_id",
        foreignKey = @ForeignKey(name = "PERSON_ID_FK")
    )
    private Person person;
}
SQL
CREATE TABLE Person (
id BIGINT NOT NULL ,
PRIMARY KEY ( id )
)
Each entity has a lifecycle of its own. Once the @ManyToOne association is set, Hibernate will set the associated database foreign
key column.
JAVA
entityManager.flush();
phone.setPerson( null );
SQL
INSERT INTO Person ( id )
VALUES ( 1 )
UPDATE Phone
SET number = '123-456-7890',
person_id = NULL
WHERE id = 2
2.7.2. @OneToMany
The @OneToMany association links a parent entity with one or more child entities. If the @OneToMany doesn’t have a mirroring
@ManyToOne association on the child side, the @OneToMany association is unidirectional. If there is a @ManyToOne association on
the child side, the @OneToMany association is bidirectional and the application developer can navigate this relationship from both
ends.
Unidirectional @OneToMany
When using a unidirectional @OneToMany association, Hibernate resorts to using a link table between the two joining entities.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Phone> phones = new ArrayList<>();
}

@Entity(name = "Phone")
public static class Phone {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`number`")
    private String number;
}
SQL
CREATE TABLE Person (
id BIGINT NOT NULL ,
PRIMARY KEY ( id )
)
JAVA
Person person = new Person();
Phone phone1 = new Phone( "123-456-7890" );
Phone phone2 = new Phone( "321-654-0987" );
person.getPhones().add( phone1 );
person.getPhones().add( phone2 );
entityManager.persist( person );
entityManager.flush();
person.getPhones().remove( phone1 );
SQL
INSERT INTO Person
( id )
VALUES ( 1 )
When persisting the Person entity, the cascade will propagate the persist operation to the underlying Phone children as well.
Upon removing a Phone from the phones collection, the association row is deleted from the link table, and the orphanRemoval
attribute will trigger a Phone removal as well.
The unidirectional associations are not very efficient when it comes to removing child entities. In this
particular example, upon flushing the persistence context, Hibernate deletes all database child entries and
reinserts the ones that are still found in the in-memory persistence context.
On the other hand, a bidirectional @OneToMany association is much more efficient because the child entity
controls the association.
Bidirectional @OneToMany
The bidirectional @OneToMany association also requires a @ManyToOne association on the child side. Although the Domain Model
exposes two sides to navigate this association, behind the scenes, the relational database has only one foreign key for this
relationship.
Every bidirectional association must have one owning side only (the child side), the other one being referred to as the inverse (or
the mappedBy ) side.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "person", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Phone> phones = new ArrayList<>();
}

@Entity(name = "Phone")
public static class Phone {

    @Id
    @GeneratedValue
    private Long id;

    @NaturalId
    @Column(name = "`number`", unique = true)
    private String number;

    @ManyToOne
    private Person person;

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        Phone phone = (Phone) o;
        return Objects.equals( number, phone.number );
    }

    @Override
    public int hashCode() {
        return Objects.hash( number );
    }
}
SQL
Whenever a bidirectional association is formed, the application developer must make sure both sides
are in sync at all times. The addPhone() and removePhone() methods are utilities that synchronize both
ends whenever a child element is added or removed.
Because the Phone class has a @NaturalId column (the phone number being unique), the equals() and hashCode() implementations can
make use of this property, and so the removePhone() logic is reduced to the remove() Java Collection method.
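The synchronization logic described above can be sketched with plain Java objects. The mapping annotations are omitted and only the helper methods and the natural-id-based equals()/hashCode() are shown, so this is an illustration rather than the full entity code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Both sides of the bidirectional association are updated together,
// and Phone#equals relies on the unique number (the natural id),
// so List#remove finds the right element.
public class BidirectionalSync {

    static class Person {
        List<Phone> phones = new ArrayList<>();

        void addPhone(Phone phone) {
            phones.add(phone);
            phone.person = this;   // keep the owning side in sync
        }

        void removePhone(Phone phone) {
            phones.remove(phone);  // relies on natural-id equals()
            phone.person = null;
        }
    }

    static class Phone {
        String number;
        Person person;

        Phone(String number) { this.number = number; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            return Objects.equals(number, ((Phone) o).number);
        }

        @Override
        public int hashCode() { return Objects.hash(number); }
    }
}
```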
JAVA
Person person = new Person();
Phone phone1 = new Phone( "123-456-7890" );
Phone phone2 = new Phone( "321-654-0987" );
person.addPhone( phone1 );
person.addPhone( phone2 );
entityManager.persist( person );
entityManager.flush();
person.removePhone( phone1 );
SQL
Unlike the unidirectional @OneToMany, the bidirectional association is much more efficient when managing the collection
persistence state. Every element removal only requires a single update (in which the foreign key column is set to NULL). Additionally, if
the child entity lifecycle is bound to its owning parent so that the child cannot exist without its parent, we can annotate the
association with the orphanRemoval attribute, and dissociating the child will trigger a delete statement on the actual child table
row as well.
2.7.3. @OneToOne
The @OneToOne association can either be unidirectional or bidirectional. A unidirectional association follows the relational
database foreign key semantics, the client-side owning the relationship. A bidirectional association features a mappedBy
@OneToOne parent side too.
Unidirectional @OneToOne
JAVA
@Entity(name = "Phone")
public static class Phone {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`number`")
    private String number;

    @OneToOne
    @JoinColumn(name = "details_id")
    private PhoneDetails details;
}

@Entity(name = "PhoneDetails")
public static class PhoneDetails {

    @Id
    @GeneratedValue
    private Long id;
}
SQL
CREATE TABLE Phone (
id BIGINT NOT NULL ,
number VARCHAR(255) ,
details_id BIGINT ,
PRIMARY KEY ( id )
)
From a relational database point of view, the underlying schema is identical to the unidirectional @ManyToOne association, as the
client-side controls the relationship based on the foreign key column.
But then, it’s unusual to consider the Phone as a client-side and the PhoneDetails as the parent-side because the details cannot
exist without an actual phone. A much more natural mapping would be if the Phone were the parent-side, therefore pushing the
foreign key into the PhoneDetails table. This mapping requires a bidirectional @OneToOne association as you can see in the
following example:
Bidirectional @OneToOne
JAVA
@Entity(name = "Phone")
public static class Phone {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`number`")
    private String number;

    @OneToOne(
        mappedBy = "phone",
        cascade = CascadeType.ALL,
        orphanRemoval = true,
        fetch = FetchType.LAZY
    )
    private PhoneDetails details;
}

@Entity(name = "PhoneDetails")
public static class PhoneDetails {

    @Id
    @GeneratedValue
    private Long id;

    @OneToOne(fetch = FetchType.LAZY)
    private Phone phone;
}
SQL
This time, the PhoneDetails owns the association, and, like any bidirectional association, the parent-side can propagate its
lifecycle to the child-side through cascading.
JAVA
Phone phone = new Phone( "123-456-7890" );
PhoneDetails details = new PhoneDetails( "T-Mobile", "GSM" );
phone.addDetails( details );
entityManager.persist( phone );
SQL
INSERT INTO Phone ( number, id )
VALUES ( '123-456-7890', 1 )
When using a bidirectional @OneToOne association, Hibernate enforces the unique constraint upon fetching the child side. If
more than one child is associated with the same parent, Hibernate will throw an
org.hibernate.exception.ConstraintViolationException. Continuing the previous example, when adding another
PhoneDetails, Hibernate validates the uniqueness constraint when reloading the Phone object.
JAVA
PhoneDetails otherDetails = new PhoneDetails( "T-Mobile", "CDMA" );
otherDetails.setPhone( phone );
entityManager.persist( otherDetails );
entityManager.flush();
entityManager.clear();

//throws javax.persistence.PersistenceException: org.hibernate.HibernateException: More than one row with the given identifier was found: 1
phone = entityManager.find( Phone.class, phone.getId() );
Although you might annotate the parent-side association to be fetched lazily, Hibernate cannot honor this request since it cannot
know whether the association is null or not.
The only way to figure out whether there is an associated record on the child side is to fetch the child association using a
secondary query. Because this can lead to N+1 query issues, it’s much more efficient to use unidirectional @OneToOne
associations with the @MapsId annotation in place.
However, if you really need to use a bidirectional association and want to make sure that this is always going to be fetched lazily,
then you need to enable lazy state initialization bytecode enhancement and use the @LazyToOne
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyToOne.html) annotation as well.
JAVA
@Entity(name = "Phone")
public static class Phone {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`number`")
    private String number;

    @OneToOne(
        mappedBy = "phone",
        cascade = CascadeType.ALL,
        orphanRemoval = true,
        fetch = FetchType.LAZY
    )
    @LazyToOne( LazyToOneOption.NO_PROXY )
    private PhoneDetails details;
}

@Entity(name = "PhoneDetails")
public static class PhoneDetails {

    @Id
    @GeneratedValue
    private Long id;
}
For more about how to enable Bytecode enhancement, see the BytecodeEnhancement chapter.
2.7.4. @ManyToMany
The @ManyToMany association requires a link table that joins two entities. Like the @OneToMany association, @ManyToMany can
be either unidirectional or bidirectional.
Unidirectional @ManyToMany
JAVA
@Entity(name = "Person")
public static class Person {
@Id
@GeneratedValue
private Long id;
@Entity(name = "Address")
public static class Address {
@Id
@GeneratedValue
private Long id;
@Column(name = "`number`")
private String number;
SQL
CREATE TABLE Address (
id BIGINT NOT NULL ,
number VARCHAR(255) ,
street VARCHAR(255) ,
PRIMARY KEY ( id )
)
Just like with unidirectional @OneToMany associations, the link table is controlled by the owning side.
When an entity is removed from the @ManyToMany collection, Hibernate simply deletes the joining record in the link table.
Unfortunately, this operation requires removing all entries associated with a given parent and recreating the ones that are listed
in the currently running persistence context.
JAVA
Person person1 = new Person();
Person person2 = new Person();
person1.getAddresses().add( address1 );
person1.getAddresses().add( address2 );
person2.getAddresses().add( address1 );
entityManager.persist( person1 );
entityManager.persist( person2 );
entityManager.flush();
person1.getAddresses().remove( address1 );
SQL
INSERT INTO Person ( id )
VALUES ( 1 )
For @ManyToMany associations, it doesn’t make sense to cascade the REMOVE entity state transition
because it propagates beyond the link table. Since the other side might be referenced by other
entities on the parent side, the automatic removal might end up in a ConstraintViolationException.
For example, if @ManyToMany(cascade = CascadeType.ALL) were defined and the first person were deleted,
Hibernate would throw an exception because another person is still associated with the address that’s being
deleted.
By simply removing the parent-side, Hibernate can safely remove the associated link records as you can see in the following
example:
JAVA
Person person1 = entityManager.find( Person.class, personId );
entityManager.remove( person1 );
SQL
DELETE FROM Person_Address
WHERE Person_id = 1
Bidirectional @ManyToMany
A bidirectional @ManyToMany association has an owning and a mappedBy side. To preserve synchronicity between both sides, it’s
good practice to provide helper methods for adding or removing child entities.
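Such helper methods boil down to updating the two List fields together; a plain-Java sketch with the mapping annotations omitted (names illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Both List sides of the bidirectional @ManyToMany are kept in sync
// by a single pair of helper methods on the owning side.
public class ManyToManySync {

    static class Person {
        List<Address> addresses = new ArrayList<>();

        void addAddress(Address address) {
            addresses.add(address);
            address.owners.add(this);    // sync the inverse side
        }

        void removeAddress(Address address) {
            addresses.remove(address);
            address.owners.remove(this); // sync the inverse side
        }
    }

    static class Address {
        List<Person> owners = new ArrayList<>();
    }
}
```

Callers only ever invoke addAddress()/removeAddress(); the inverse owners collection is never mutated directly.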
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    @GeneratedValue
    private Long id;

    @NaturalId
    private String registrationNumber;

    @ManyToMany(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
    private List<Address> addresses = new ArrayList<>();

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        Person person = (Person) o;
        return Objects.equals( registrationNumber, person.registrationNumber );
    }

    @Override
    public int hashCode() {
        return Objects.hash( registrationNumber );
    }
}
@Entity(name = "Address")
public static class Address {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`number`")
    private String number;

    private String street;

    private String postalCode;

    @ManyToMany(mappedBy = "addresses")
    private List<Person> owners = new ArrayList<>();

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        Address address = (Address) o;
        return Objects.equals( street, address.street ) &&
                Objects.equals( number, address.number ) &&
                Objects.equals( postalCode, address.postalCode );
    }

    @Override
    public int hashCode() {
        return Objects.hash( street, number, postalCode );
    }
}
SQL
CREATE TABLE Address (
id BIGINT NOT NULL ,
number VARCHAR(255) ,
postalCode VARCHAR(255) ,
street VARCHAR(255) ,
PRIMARY KEY ( id )
)
With the helper methods in place, the synchronicity management can be simplified, as you can see in the following example:
JAVA
person1.addAddress( address1 );
person1.addAddress( address2 );
person2.addAddress( address1 );
entityManager.persist( person1 );
entityManager.persist( person2 );
entityManager.flush();
person1.removeAddress( address1 );
SQL
INSERT INTO Person ( registrationNumber, id )
VALUES ( 'ABC-123', 1 )
While a bidirectional @OneToMany association performs better when removing or changing the order of child elements, the
@ManyToMany relationship cannot benefit from such an optimization because the foreign key side is not in control. To overcome
this limitation, the link table must be directly exposed and the @ManyToMany association split into two bidirectional @OneToMany
relationships.
JAVA
@Entity(name = "Person")
public static class Person implements Serializable {

    @Id
    @GeneratedValue
    private Long id;

    @NaturalId
    private String registrationNumber;

    @OneToMany(
        mappedBy = "person",
        cascade = CascadeType.ALL,
        orphanRemoval = true
    )
    private List<PersonAddress> addresses = new ArrayList<>();

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        Person person = (Person) o;
        return Objects.equals( registrationNumber, person.registrationNumber );
    }

    @Override
    public int hashCode() {
        return Objects.hash( registrationNumber );
    }
}
@Entity(name = "PersonAddress")
public static class PersonAddress implements Serializable {

    @Id
    @ManyToOne
    private Person person;

    @Id
    @ManyToOne
    private Address address;

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        PersonAddress that = (PersonAddress) o;
        return Objects.equals( person, that.person ) &&
                Objects.equals( address, that.address );
    }

    @Override
    public int hashCode() {
        return Objects.hash( person, address );
    }
}
@Entity(name = "Address")
public static class Address implements Serializable {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`number`")
    private String number;

    private String street;

    private String postalCode;

    @OneToMany(
        mappedBy = "address",
        cascade = CascadeType.ALL,
        orphanRemoval = true
    )
    private List<PersonAddress> owners = new ArrayList<>();

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        Address address = (Address) o;
        return Objects.equals( street, address.street ) &&
                Objects.equals( number, address.number ) &&
                Objects.equals( postalCode, address.postalCode );
    }

    @Override
    public int hashCode() {
        return Objects.hash( street, number, postalCode );
    }
}
SQL
Both the Person and the Address have a mappedBy @OneToMany side, while the PersonAddress owns the person and the
address @ManyToOne associations. Because this mapping is formed out of two bidirectional associations, the helper methods are
even more relevant.
The aforementioned example uses a Hibernate specific mapping for the link entity since JPA doesn’t
allow building a composite identifier out of multiple @ManyToOne associations. For more details, see the
Composite identifiers - associations section.
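With the link entity in place, the helper methods create (or remove) a PersonAddress instance and register it on both @OneToMany collections. A pared-down plain-Java sketch, with the mapping annotations omitted and equality defined by the two association references as in the mapping above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// The @ManyToMany is replaced by an explicit link entity; the helper
// creates one PersonAddress and registers it on both parent collections.
public class LinkEntitySync {

    static class Person {
        List<PersonAddress> addresses = new ArrayList<>();

        void addAddress(Address address) {
            PersonAddress link = new PersonAddress(this, address);
            addresses.add(link);
            address.owners.add(link);
        }

        void removeAddress(Address address) {
            // equals() below matches on the (person, address) pair,
            // so a fresh probe instance locates the existing link.
            PersonAddress link = new PersonAddress(this, address);
            addresses.remove(link);
            address.owners.remove(link);
        }
    }

    static class Address {
        List<PersonAddress> owners = new ArrayList<>();
    }

    static class PersonAddress {
        Person person;
        Address address;

        PersonAddress(Person person, Address address) {
            this.person = person;
            this.address = address;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            PersonAddress that = (PersonAddress) o;
            return Objects.equals(person, that.person)
                && Objects.equals(address, that.address);
        }

        @Override
        public int hashCode() { return Objects.hash(person, address); }
    }
}
```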
The entity state transitions are better managed than in the previous bidirectional @ManyToMany case.
JAVA
entityManager.persist( person1 );
entityManager.persist( person2 );
entityManager.persist( address1 );
entityManager.persist( address2 );
person1.addAddress( address1 );
person1.addAddress( address2 );
person2.addAddress( address1 );
entityManager.flush();
SQL
INSERT INTO Person ( registrationNumber, id )
VALUES ( 'ABC-123', 1 )
There is only one delete statement executed because, this time, the association is controlled by the @ManyToOne side which only
has to monitor the state of the underlying foreign key relationship to trigger the right DML statement.
By default, Hibernate will complain whenever a child association references a non-existing parent record. However, you can
configure this behavior so that Hibernate ignores such a dangling reference and simply assigns null to the parent object reference.
To ignore non-existing parent entity references, even though not really recommended, you can use the
org.hibernate.annotations.NotFound annotation with a value of org.hibernate.annotations.NotFoundAction.IGNORE.
JAVA
@Entity
@Table( name = "Person" )
public static class Person {
@Id
private Long id;
@Entity
@Table( name = "City" )
public static class City implements Serializable {
@Id
@GeneratedValue
private Long id;
JAVA
City _NewYork = new City();
_NewYork.setName( "New York" );
entityManager.persist( _NewYork );
When loading the Person entity, Hibernate is able to locate the associated City parent entity:
JAVA
Person person = entityManager.find( Person.class, 1L );
assertEquals( "New York", person.getCity().getName() );
JAVA
person.setCityName( "Atlantis" );
Hibernate is not going to throw any exception, and it will assign a value of null for the non-existing City entity reference:
JAVA
Person person = entityManager.find( Person.class, 1L );
2.8. Collections
Naturally, Hibernate also allows persisting collections. These persistent collections can contain almost any other Hibernate type,
including basic types, custom types, embeddables, and references to other entities. In this context, the distinction between value
and reference semantics is very important. An object in a collection might be handled with value semantics (its lifecycle being
fully dependent on the collection owner), or it might be a reference to another entity with its own lifecycle. In the latter case, only
the link between the two objects is considered to be a state held by the collection.
The owner of the collection is always an entity, even if the collection is defined by an embeddable type. Collections form
one/many-to-many associations between types so there can be:
entity collections
Hibernate uses its own collection implementations which are enriched with lazy-loading, caching or state change detection
semantics. For this reason, persistent collections must be declared as an interface type. The actual interface might be
java.util.Collection, java.util.List, java.util.Set, java.util.Map, java.util.SortedSet,
java.util.SortedMap or even other object types (meaning you will have to write an implementation of
org.hibernate.usertype.UserCollectionType).
As the following example demonstrates, it’s important to use the interface type and not the collection implementation, as declared
in the entity mapping.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @ElementCollection
    private List<String> phones = new ArrayList<>();
}
It is important that collections be defined using the appropriate Java Collections Framework interface
rather than a specific implementation. From a theoretical perspective, this just follows good design
principles. From a practical perspective, Hibernate (like other persistence providers) will use their own
collection implementations which conform to the Java Collections Framework interfaces.
The persistent collections injected by Hibernate behave like ArrayList , HashSet , TreeSet , HashMap or TreeMap , depending
on the interface type.
Two entities cannot share a reference to the same collection instance. Collection-valued properties do
not support null value semantics because Hibernate does not distinguish between a null collection
reference and an empty collection.
For collections of value types, JPA 2.0 defines the @ElementCollection annotation. The lifecycle of the value-type collection is
entirely controlled by its owning entity.
Considering the previous example mapping, when clearing the phone collection, Hibernate deletes all the associated phones.
When adding a new element to the value type collection, Hibernate issues a new insert statement.
JAVA
person.getPhones().clear();
person.getPhones().add( "123-456-7890" );
person.getPhones().add( "456-000-1234" );
SQL
DELETE FROM Person_phones WHERE Person_id = 1
While removing all elements or adding new ones is rather straightforward, removing a particular entry actually requires reconstructing
the whole collection from scratch.
JAVA
person.getPhones().remove( 0 );
SQL
DELETE FROM Person_phones WHERE Person_id = 1
Depending on the number of elements, this behavior might not be efficient if many elements need to be deleted and reinserted
into the database table. A workaround is to use an @OrderColumn which, although not as efficient as using the actual
link table primary key, might improve the efficiency of the remove operations.
JAVA
@ElementCollection
@OrderColumn(name = "order_id")
private List<String> phones = new ArrayList<>();

person.getPhones().remove( 0 );
SQL
DELETE FROM Person_phones
WHERE Person_id = 1
AND order_id = 1
UPDATE Person_phones
SET phones = '456-000-1234'
WHERE Person_id = 1
AND order_id = 0
The @OrderColumn column works best when removing from the tail of the collection, as it only requires
a single delete statement. Removing from the head or the middle of the collection requires deleting
the extra elements and updating the remaining ones to preserve element order.
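The asymmetry described in the note can be counted directly: removing the element at index i from an n-element @OrderColumn list needs one DELETE plus one UPDATE per trailing element whose order value shifts down. A small illustrative model of the statement count (not Hibernate code):

```java
// Number of SQL statements needed to remove the element at a given
// index from an @OrderColumn-backed list of the given size:
// one DELETE for the removed row, plus one UPDATE for every element
// after it whose order column must shift down by one.
public class OrderColumnCost {

    public static int statementsToRemove(int index, int size) {
        int deletes = 1;
        int updates = size - index - 1; // elements after the removed one
        return deletes + updates;
    }
}
```

Removing the tail of a 10-element list costs a single statement, while removing the head costs 10, which matches the DELETE-plus-UPDATE pair shown above for removing index 0 from a 2-element collection.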
Embeddable type collections behave the same way as value type collections. Adding embeddables to the collection triggers the
associated insert statements and removing elements from the collection will generate delete statements.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @ElementCollection
    private List<Phone> phones = new ArrayList<>();
}

@Embeddable
public static class Phone {

    @Column(name = "`number`")
    private String number;

    private String type;
}
SQL
INSERT INTO Person_phones ( Person_id , number, type )
VALUES ( 1, '028-234-9876', 'landline' )
From a relational database perspective, associations are defined by the foreign key side (the child side). With value type
collections, only the entity can control the association (the parent side), but for a collection of entities, both sides of the association
are managed by the persistence context.
For this reason, entity collections can be divided into two main categories: unidirectional and bidirectional associations.
Unidirectional associations are very similar to value type collections since only the parent side controls the relationship.
Bidirectional associations are trickier since, even if both sides need to be in sync at all times, only one side is responsible for
managing the association. A bidirectional association has an owning side and an inverse (mappedBy) side.
Another way of categorizing entity collections is by the underlying collection type, and so we can have:
bags
indexed lists
sets
sorted sets
maps
sorted maps
arrays
In the following sections, we will go through all these collection types and discuss both unidirectional and bidirectional
associations.
2.8.4. Bags
Bags are unordered lists, and we can have unidirectional bags or bidirectional ones.
Unidirectional bags
The unidirectional bag is mapped using a single @OneToMany annotation on the parent side of the association. Behind the scenes,
Hibernate requires an association table to manage the parent-child relationship, as we can see in the following example:
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @OneToMany(cascade = CascadeType.ALL)
    private List<Phone> phones = new ArrayList<>();
}

@Entity(name = "Phone")
public static class Phone {

    @Id
    private Long id;

    @Column(name = "`number`")
    private String number;
}
SQL
Because both the parent and the child sides are entities, the persistence context manages each entity
separately. Cascades can propagate an entity state transition from a parent entity to its children.
By marking the parent side with the CascadeType.ALL attribute, the unidirectional association lifecycle becomes very similar to
that of a value type collection.
JAVA
Person person = new Person( 1L );
person.getPhones().add( new Phone( 1L, "landline", "028-234-9876" ) );
person.getPhones().add( new Phone( 2L, "mobile", "072-122-9876" ) );
entityManager.persist( person );
SQL
In the example above, once the parent entity is persisted, the child entities are going to be persisted as well.
Just like value type collections, unidirectional bags are not as efficient when it comes to modifying the
collection structure (removing or reshuffling elements). Because the parent-side cannot uniquely
identify each individual child, Hibernate might delete all child table rows associated with the parent
entity and re-add them according to the current collection state.
Bidirectional bags
The bidirectional bag is the most common type of entity collection. The @ManyToOne side is the owning side of the bidirectional
bag association, while the @OneToMany is the inverse side, being marked with the mappedBy attribute.
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
    private List<Phone> phones = new ArrayList<>();
}

@Entity(name = "Phone")
public static class Phone {

    @Id
    private Long id;

    @Column(name = "`number`")
    private String number;

    @ManyToOne
    private Person person;

    @Override
    public boolean equals(Object o) {
        if ( this == o ) {
            return true;
        }
        if ( o == null || getClass() != o.getClass() ) {
            return false;
        }
        Phone phone = (Phone) o;
        return Objects.equals( number, phone.number );
    }

    @Override
    public int hashCode() {
        return Objects.hash( number );
    }
}
SQL
JAVA
person.addPhone( new Phone( 1L, "landline", "028-234-9876" ) );
person.addPhone( new Phone( 2L, "mobile", "072-122-9876" ) );
entityManager.flush();
person.removePhone( person.getPhones().get( 0 ) );
SQL
INSERT INTO Phone (number, person_id, type, id)
VALUES ( '028-234-9876', 1, 'landline', 1 )
UPDATE Phone
SET person_id = NULL, type = 'landline' where id = 1
JAVA
@OneToMany(mappedBy = "person", cascade = CascadeType.ALL, orphanRemoval = true)
private List<Phone> phones = new ArrayList<>();
SQL
DELETE FROM Phone WHERE id = 1
When rerunning the previous example, the child will get removed because the parent-side propagates the removal upon
dissociating the child entity reference.
@OrderBy
the collection is ordered upon retrieval using a child entity property
@OrderColumn
the collection uses a dedicated order column in the collection link table
JAVA
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @OneToMany(cascade = CascadeType.ALL)
    @OrderBy("number")
    private List<Phone> phones = new ArrayList<>();
}

@Entity(name = "Phone")
public static class Phone {

    @Id
    private Long id;

    @Column(name = "`number`")
    private String number;
}
The database mapping is the same as with the Unidirectional bags example, so it won’t be repeated. Upon fetching the collection,
Hibernate generates the following select statement:
SQL
SELECT
phones0_.Person_id AS Person_i1_1_0_ ,
phones0_.phones_id AS phones_i2_1_0_,
unidirecti1_.id AS id1_2_1_,
unidirecti1_."number" AS number2_2_1_,
unidirecti1_.type AS type3_2_1_
FROM
Person_Phone phones0_
INNER JOIN
Phone unidirecti1_ ON phones0_.phones_id=unidirecti1_.id
WHERE
phones0_.Person_id = 1
ORDER BY
unidirecti1_."number"
The @OrderBy annotation can take multiple entity properties, and each property can take an ordering
direction too (e.g. @OrderBy("name ASC, type DESC") ).
If no property is specified (e.g. @OrderBy ), the primary key of the child entity table is used for ordering.
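In-memory, a multi-property clause such as @OrderBy("name ASC, type DESC") corresponds to a chained Comparator, which can be sketched in plain Java (the class and field names here are illustrative, not taken from the mapping above):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// The multi-property @OrderBy("name ASC, type DESC") clause maps to
// a comparator that sorts by name ascending, then by type descending.
public class OrderByComparator {

    static class Phone {
        String name;
        String type;
        Phone(String name, String type) { this.name = name; this.type = type; }
    }

    public static List<Phone> sorted(List<Phone> phones) {
        List<Phone> copy = new ArrayList<>(phones);
        copy.sort(Comparator.comparing((Phone p) -> p.name)
                .thenComparing((Phone p) -> p.type, Comparator.reverseOrder()));
        return copy;
    }
}
```

The difference is that @OrderBy pushes this ordering into the SQL ORDER BY clause, so the database, not the JVM, performs the sort.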
JAVA
@OneToMany(cascade = CascadeType.ALL)
@OrderColumn(name = "order_id")
private List<Phone> phones = new ArrayList<>();
SQL
CREATE TABLE Person_Phone (
Person_id BIGINT NOT NULL ,
phones_id BIGINT NOT NULL ,
order_id INTEGER NOT NULL ,
PRIMARY KEY ( Person_id , order_id )
)
This time, the link table takes the order_id column and uses it to materialize the collection element order. When fetching the
list, the following select query is executed:
SQL
select
phones0_.Person_id as Person_i1_1_0_ ,
phones0_.phones_id as phones_i2_1_0_,
phones0_.order_id as order_id3_0_,
unidirecti1_.id as id1_2_1_,
unidirecti1_.number as number2_2_1_,
unidirecti1_.type as type3_2_1_
from
Person_Phone phones0_
inner join
Phone unidirecti1_
on phones0_.phones_id=unidirecti1_.id
where
phones0_.Person_id = 1
With the order_id column in place, Hibernate can order the list in-memory after it’s fetched from the database.
JAVA
@OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
@OrderBy("number")
private List<Phone> phones = new ArrayList<>();
Just like with the unidirectional @OrderBy list, the number column is used to order the statement on the SQL level.
When using the @OrderColumn annotation, the order_id column is going to be embedded in the child table:
JAVA
@OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
@OrderColumn(name = "order_id")
private List<Phone> phones = new ArrayList<>();
SQL
CREATE TABLE Phone (
id BIGINT NOT NULL ,
number VARCHAR(255) ,
type VARCHAR(255) ,
person_id BIGINT ,
order_id INTEGER ,
PRIMARY KEY ( id )
)
When fetching the collection, Hibernate will use the fetched ordered columns to sort the elements according to the
@OrderColumn mapping.
By default, the @OrderColumn index starts at 0. If you want the index to start with a different value, use the Hibernate-specific @ListIndexBase
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ListIndexBase.html) annotation.
JAVA
@OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
@OrderColumn(name = "order_id")
@ListIndexBase(100)
private List<Phone> phones = new ArrayList<>();
When inserting two Phone records, Hibernate is going to start the List index from 100 this time.
JAVA
Person person = new Person( 1L );
entityManager.persist( person );
person.addPhone( new Phone( 1L, "landline", "028-234-9876" ) );
person.addPhone( new Phone( 2L, "mobile", "072-122-9876" ) );
SQL
INSERT INTO Phone ("number", person_id, type, id)
VALUES ('028-234-9876', 1, 'landline', 1)
UPDATE Phone
SET order_id = 100
WHERE id = 1
UPDATE Phone
SET order_id = 101
WHERE id = 2
In the following example, the @OrderBy annotation uses the CHAR_LENGTH SQL function to order the Article entities by the
number of characters of the name attribute.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@OneToMany(
mappedBy = "person",
cascade = CascadeType.ALL
)
@org.hibernate.annotations.OrderBy(
clause = "CHAR_LENGTH(name) DESC"
)
private List<Article> articles = new ArrayList<>();
@Entity(name = "Article")
public static class Article {
@Id
@GeneratedValue
private Long id;
When fetching the articles collection, Hibernate uses the ORDER BY SQL clause provided by the mapping:
JAVA
Person person = entityManager.find( Person.class, 1L );
assertEquals(
"High-Performance Hibernate",
person.getArticles().get( 0 ).getName()
);
SQL
select
a.person_id as person_i4_0_0_,
a.id as id1_0_0_,
a.content as content2_0_1_,
a.name as name3_0_1_,
a.person_id as person_i4_0_1_
from
Article a
where
a.person_id = ?
order by
CHAR_LENGTH(a.name) desc
2.8.6. Sets
Sets are collections that don’t allow duplicate entries. Hibernate supports both the unordered Set and the naturally-ordered
SortedSet.
Unidirectional sets
The unidirectional set uses a link table to hold the parent-child associations and the entity mapping looks as follows:
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Entity(name = "Phone")
public static class Phone {
@Id
private Long id;
@NaturalId
@Column(name = "`number`")
private String number;
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Phone phone = (Phone) o;
return Objects.equals( number, phone.number );
}
@Override
public int hashCode() {
return Objects.hash( number );
}
}
The unidirectional set lifecycle is similar to that of the unidirectional bag, so its description is omitted here. The only difference is that Set
doesn’t allow duplicates, but this constraint is enforced by the Java object contract rather than by the database mapping.
When using sets, it’s very important to supply proper equals/hashCode implementations for child
entities. In the absence of a custom equals/hashCode implementation logic, Hibernate will use the default Java
reference-based object equality which might render unexpected results when mixing detached and managed
object instances.
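To see the effect of a natural-id based equals/hashCode, consider this simplified, standalone sketch of the Phone child entity: two distinct instances with the same number count as a single Set element, which is exactly what you want when mixing detached and managed instances of the same row.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Simplified sketch of the Phone child entity: equality is based on the
// natural id (the phone number), not on object identity.
public class PhoneSetDemo {

    static class Phone {
        final String number;
        Phone(String number) {
            this.number = number;
        }
        @Override
        public boolean equals(Object o) {
            if ( this == o ) return true;
            if ( o == null || getClass() != o.getClass() ) return false;
            return Objects.equals( number, ( (Phone) o ).number );
        }
        @Override
        public int hashCode() {
            return Objects.hash( number );
        }
    }

    public static int distinctCount() {
        Set<Phone> phones = new HashSet<>();
        // two distinct instances, same natural id: the Set keeps only one
        phones.add( new Phone( "027-123-4567" ) );
        phones.add( new Phone( "027-123-4567" ) );
        return phones.size();
    }
}
```

With reference-based equality instead, the set would contain two entries for the same logical phone.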
Bidirectional sets
Just like bidirectional bags, the bidirectional set doesn’t use a link table, and the child table has a foreign key referencing the
parent table primary key. The lifecycle is just like with bidirectional bags except for the duplicates which are filtered out.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Entity(name = "Phone")
public static class Phone {
@Id
private Long id;
@NaturalId
@Column(name = "`number`")
private String number;
@ManyToOne
private Person person;
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Phone phone = (Phone) o;
return Objects.equals( number, phone.number );
}
@Override
public int hashCode() {
return Objects.hash( number );
}
}
2.8.7. Sorted sets
A SortedSet keeps its elements sorted, either by their natural ordering or by a custom Comparator. The following mapping relies on the natural ordering defined by Comparable:
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Entity(name = "Phone")
public static class Phone implements Comparable<Phone> {
@Id
private Long id;
@NaturalId
@Column(name = "`number`")
private String number;
@Override
public int compareTo(Phone o) {
return number.compareTo( o.getNumber() );
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Phone phone = (Phone) o;
return Objects.equals( number, phone.number );
}
@Override
public int hashCode() {
return Objects.hash( number );
}
}
The lifecycle and the database mapping are identical to the Unidirectional bags, so they are intentionally omitted.
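The natural ordering given by compareTo can be verified in plain Java; a TreeSet keeps the elements sorted by number, just as Hibernate does with @SortNatural (a simplified Phone sketch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

// Simplified sketch: with @SortNatural, Hibernate relies on the element's
// Comparable implementation, exactly like a plain TreeSet does.
public class SortedSetDemo {

    static class Phone implements Comparable<Phone> {
        final String number;
        Phone(String number) {
            this.number = number;
        }
        @Override
        public int compareTo(Phone o) {
            return number.compareTo( o.number );
        }
    }

    public static List<String> sortedNumbers() {
        SortedSet<Phone> phones = new TreeSet<>();
        // inserted out of order; the TreeSet sorts them by number
        phones.add( new Phone( "072-122-9876" ) );
        phones.add( new Phone( "028-234-9876" ) );
        List<String> numbers = new ArrayList<>();
        for ( Phone p : phones ) {
            numbers.add( p.number );
        }
        return numbers;
    }
}
```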
To sort the elements in reverse order, a custom Comparator can be supplied via @SortComparator:
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
}
public static class ReverseComparator implements Comparator<Phone> {
@Override
public int compare(Phone o1, Phone o2) {
return o2.compareTo( o1 );
}
}
@Entity(name = "Phone")
public static class Phone implements Comparable<Phone> {
@Id
private Long id;
@NaturalId
@Column(name = "`number`")
private String number;
@Override
public int compareTo(Phone o) {
return number.compareTo( o.getNumber() );
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Phone phone = (Phone) o;
return Objects.equals( number, phone.number );
}
@Override
public int hashCode() {
return Objects.hash( number );
}
}
JAVA
@OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
@SortNatural
private SortedSet<Phone> phones = new TreeSet<>();
@SortComparator(ReverseComparator.class)
private SortedSet<Phone> phones = new TreeSet<>();
2.8.8. Maps
A java.util.Map is a ternary association because it requires a parent entity, a map key, and a value. An entity can either be a
map key or a map value, depending on the mapping. Hibernate allows using the following map keys:
MapKeyColumn
for value type maps, the map key is a column in the link table that defines the grouping logic
MapKey
the map key is either the primary key or another property of the entity stored as a map entry value
MapKeyEnumerated
the map key is a Java Enum stored in the child table
MapKeyTemporal
if the map key is a Date, this annotation specifies the associated temporal type (DATE, TIME, or TIMESTAMP)
MapKeyJoinColumn
the map key is an entity mapped as an association in the child entity that’s stored as a map entry key
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Temporal(TemporalType.TIMESTAMP)
@ElementCollection
@CollectionTable(name = "phone_register")
@Column(name = "since")
private Map<Phone, Date> phoneRegister = new HashMap<>();
@Embeddable
public static class Phone {
@Column(name = "`number`")
private String number;
SQL
CREATE TABLE Person (
id BIGINT NOT NULL ,
PRIMARY KEY ( id )
)
JAVA
person.getPhoneRegister().put(
new Phone( PhoneType.LAND_LINE, "028-234-9876" ), new Date()
);
person.getPhoneRegister().put(
new Phone( PhoneType.MOBILE, "072-122-9876" ), new Date()
);
SQL
INSERT INTO phone_register (Person_id , number, type, since)
VALUES (1, '072-122-9876', 1, '2015-12-15 17:16:45.311')
SQL
create table person (
id int8 not null,
primary key (id)
)
The call_register table records the call history for every person. The call_timestamp_epoch column stores the phone call
timestamp as a numeric Unix timestamp (time since the epoch).
The @MapKeyColumn annotation is used to define the table column holding the key while the @Column
mapping gives the value of the java.util.Map in question.
Since we want to map all the calls by their associated java.util.Date , not by their timestamp since epoch which is a number,
the entity mapping looks as follows:
JAVA
@Entity
@Table(name = "person")
public static class Person {
@Id
private Long id;
@ElementCollection
@CollectionTable(
name = "call_register",
joinColumns = @JoinColumn(name = "person_id")
)
@MapKeyType(
@Type(
type = "org.hibernate.userguide.collections.type.TimestampEpochType"
)
)
@MapKeyColumn( name = "call_timestamp_epoch" )
@Column(name = "phone_number")
private Map<Date, Integer> callRegister = new HashMap<>();
public class TimestampEpochType
extends AbstractSingleColumnStandardBasicType<Date>
implements VersionType<Date>, LiteralType<Date> {
public TimestampEpochType() {
super(
BigIntTypeDescriptor.INSTANCE,
JdbcTimestampTypeDescriptor.INSTANCE
);
}
@Override
public String getName() {
return "epoch";
}
@Override
public Date next(
Date current,
SharedSessionContractImplementor session) {
return seed( session );
}
@Override
public Date seed(
SharedSessionContractImplementor session) {
return new Timestamp( System.currentTimeMillis() );
}
@Override
public Comparator<Date> getComparator() {
return getJavaTypeDescriptor().getComparator();
}
@Override
public String objectToSQLString(
Date value,
Dialect dialect) throws Exception {
final Timestamp ts = Timestamp.class.isInstance( value )
? ( Timestamp ) value
: new Timestamp( value.getTime() );
return StringType.INSTANCE.objectToSQLString(
ts.toString(), dialect
);
}
@Override
public Date fromStringValue(
String xml) throws HibernateException {
return fromString( xml );
}
}
The TimestampEpochType allows us to map a Unix timestamp since epoch to a java.util.Date . But, without the @MapKeyType
Hibernate annotation, it would not be possible to customize the Map key type.
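At its core, the conversion performed by the custom type is a plain epoch-millisecond round trip between a BIGINT column value and a java.util.Date (an illustrative sketch only; the real work happens in the type descriptors above):

```java
import java.sql.Timestamp;
import java.util.Date;

// Illustrative round trip between an epoch-based column value and
// java.util.Date: a BIGINT epoch value materializes as a Timestamp.
public class EpochDemo {

    public static Date fromEpoch(long epochMillis) {
        return new Timestamp( epochMillis );
    }

    public static long toEpoch(Date date) {
        return date.getTime();
    }

    public static boolean roundTrips(long epochMillis) {
        // reading then writing the value must preserve it exactly
        return toEpoch( fromEpoch( epochMillis ) ) == epochMillis;
    }
}
```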
JAVA
public interface PhoneNumber {
String get();
}
@Embeddable
public static class MobilePhone
implements PhoneNumber {
private MobilePhone () {
}
public MobilePhone (
String countryCode,
String operatorCode,
String subscriberCode) {
this.countryCode = countryCode;
this.operatorCode = operatorCode;
this.subscriberCode = subscriberCode;
}
@Column(name = "country_code")
private String countryCode;
@Column(name = "operator_code")
private String operatorCode;
@Column(name = "subscriber_code")
private String subscriberCode;
@Override
public String get() {
return String .format(
"%s-%s-%s",
countryCode,
operatorCode,
subscriberCode
);
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
MobilePhone that = (MobilePhone) o;
return Objects.equals( countryCode, that.countryCode ) &&
Objects.equals( operatorCode, that.operatorCode ) &&
Objects.equals( subscriberCode, that.subscriberCode );
}
@Override
public int hashCode() {
return Objects.hash( countryCode, operatorCode, subscriberCode );
}
}
If you want to use the PhoneNumber interface as a java.util.Map key, then you need to supply the @MapKeyClass
(http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyClass.html) annotation as well.
JAVA
@Entity
@Table(name = "person")
public static class Person {
@Id
private Long id;
@ElementCollection
@CollectionTable(
name = "call_register",
joinColumns = @JoinColumn(name = "person_id")
)
@MapKeyColumn( name = "call_timestamp_epoch" )
@MapKeyClass( MobilePhone.class )
@Column(name = "call_register")
private Map<PhoneNumber, Integer> callRegister = new HashMap<>();
SQL
create table person (
id bigint not null,
primary key (id)
)
When inserting a Person with a callRegister containing 2 MobilePhone references, Hibernate generates the following SQL
statements:
JAVA
entityManager.persist( person );
SQL
insert into person (id) values (?)
When fetching a Person and accessing the callRegister Map , Hibernate generates the following SQL statements:
JAVA
Person person = entityManager.find( Person.class, 1L );
assertEquals( 2, person.getCallRegister().size() );
assertEquals(
Integer.valueOf( 101 ),
person.getCallRegister().get( MobilePhone.fromString( "01-234-567" ) )
);
assertEquals(
Integer.valueOf( 102 ),
person.getCallRegister().get( MobilePhone.fromString( "01-234-789" ) )
);
SQL
select
cr.person_id as person_i1_0_0_,
cr.call_register as call_reg2_0_0_,
cr.country_code as country_3_0_,
cr.operator_code as operator4_0_,
cr.subscriber_code as subscrib5_0_
from
call_register cr
where
cr.person_id = ?
Unidirectional maps
A unidirectional map exposes a parent-child association from the parent-side only.
The following example shows a unidirectional map which also uses a @MapKeyTemporal annotation. The map key is a timestamp,
and it’s taken from the child entity table.
The @MapKey annotation is used to define the entity attribute used as a key of the java.util.Map in
question.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Entity(name = "Phone")
public static class Phone {
@Id
@GeneratedValue
private Long id;
@Column(name = "`number`")
private String number;
Bidirectional maps
Like most bidirectional associations, this relationship is owned by the child-side while the parent is the inverse side and can
propagate its own state transitions to the child entities.
In the following example, you can see that @MapKeyEnumerated was used so that the PhoneType enumeration becomes the map key.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Entity(name = "Phone")
public static class Phone {
@Id
@GeneratedValue
private Long id;
@Column(name = "`number`")
private String number;
@ManyToOne
private Person person;
SQL
CREATE TABLE Person (
id BIGINT NOT NULL ,
PRIMARY KEY ( id )
)
2.8.9. Arrays
When discussing arrays, it is important to understand the distinction between SQL array types and Java arrays that are mapped
as part of the application’s domain model.
Not all databases implement the SQL-99 ARRAY type and, for this reason, Hibernate doesn’t support native database array types.
Hibernate does support the mapping of arrays in the Java domain model - conceptually the same as mapping a List. However, it is
important to realize that it is impossible for Hibernate to offer lazy-loading for arrays of entities and, for this reason, it is strongly
recommended to map a "collection" of entities using a List rather than an array.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
SQL
CREATE TABLE Person (
id BIGINT NOT NULL ,
phones VARBINARY(255) ,
PRIMARY KEY ( id )
)
If you want to map arrays such as String[] or int[] to database-specific array types like
PostgreSQL integer[] or text[] , you need to write a custom Hibernate Type.
This is sometimes beneficial. Consider a use-case such as a VARCHAR column that represents a delimited list/set of Strings.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Type(type = "comma_delimited_strings")
private List<String> phones = new ArrayList<>();
public class CommaDelimitedStringsJavaTypeDescriptor extends AbstractTypeDescriptor<List> {
public static final String DELIMITER = ",";
public CommaDelimitedStringsJavaTypeDescriptor() {
super(
List.class,
new MutableMutabilityPlan<List>() {
@Override
protected List deepCopyNotNull(List value) {
return new ArrayList( value );
}
}
);
}
@Override
public String toString(List value) {
return ( (List<String>) value ).stream().collect( Collectors.joining( DELIMITER ) );
}
@Override
public List fromString(String string) {
List<String> values = new ArrayList<>();
Collections.addAll( values, string.split( DELIMITER ) );
return values;
}
@Override
public <X> X unwrap(List value, Class<X> type, WrapperOptions options) {
return (X) toString( value );
}
@Override
public <X> List wrap(X value, WrapperOptions options) {
return fromString( (String) value );
}
}
}
public class CommaDelimitedStringsType extends AbstractSingleColumnStandardBasicType<List> {
public CommaDelimitedStringsType() {
super(
VarcharTypeDescriptor.INSTANCE,
new CommaDelimitedStringsJavaTypeDescriptor()
);
}
@Override
public String getName() {
return "comma_delimited_strings";
}
The developer can use the comma-delimited collection like any other collection we’ve discussed so far and Hibernate will take
care of the type transformation part. The collection itself behaves like any other basic value type, as its lifecycle is bound to its
owner entity.
JAVA
person.phones.add( "027-123-4567" );
person.phones.add( "028-234-9876" );
session.flush();
person.getPhones().remove( 0 );
SQL
INSERT INTO Person ( phones, id )
VALUES ( '027-123-4567,028-234-9876', 1 )
UPDATE Person
SET phones = '028-234-9876'
WHERE id = 1
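The string/list transformation at the heart of this custom type can be exercised on its own; the standalone sketch below mirrors the toString/fromString pair above:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Minimal sketch of the comma-delimited transformation: a List<String>
// is flattened to a single VARCHAR-style value and parsed back.
public class CommaDelimitedDemo {

    private static final String DELIMITER = ",";

    // List -> column value
    public static String join(List<String> values) {
        return values.stream().collect( Collectors.joining( DELIMITER ) );
    }

    // column value -> List
    public static List<String> split(String string) {
        return Arrays.asList( string.split( DELIMITER ) );
    }
}
```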
See the Hibernate Integrations Guide for more details on developing custom value type mappings.
Hibernate also lets you plug in a custom collection implementation through the org.hibernate.annotations.CollectionType annotation. In the following example, the phones collection is backed by a custom queue-based collection type:
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Entity(name = "Phone")
public static class Phone implements Comparable<Phone> {
@Id
private Long id;
@NaturalId
@Column(name = "`number`")
private String number;
@Override
public int compareTo(Phone o) {
return number.compareTo( o.getNumber() );
}
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Phone phone = (Phone) o;
return Objects.equals( number, phone.number );
}
@Override
public int hashCode() {
return Objects.hash( number );
}
}
public static class QueueType implements org.hibernate.usertype.UserCollectionType {
@Override
public PersistentCollection instantiate(
SharedSessionContractImplementor session,
CollectionPersister persister) throws HibernateException {
return new PersistentQueue( session );
}
@Override
public PersistentCollection wrap(
SharedSessionContractImplementor session,
Object collection) {
return new PersistentQueue( session, (List) collection );
}
@Override
public Iterator getElementsIterator(Object collection) {
return ( (Queue) collection ).iterator();
}
@Override
public boolean contains(Object collection, Object entity) {
return ( (Queue) collection ).contains( entity );
}
@Override
public Object indexOf(Object collection, Object entity) {
int i = ( (List) collection ).indexOf( entity );
return ( i < 0 ) ? null : i;
}
@Override
public Object replaceElements(
Object original,
Object target,
CollectionPersister persister,
Object owner,
Map copyCache,
SharedSessionContractImplementor session)
throws HibernateException {
Queue result = (Queue) target;
result.clear();
result.addAll( (Queue) original );
return result;
}
@Override
public Object instantiate(int anticipatedSize) {
return new LinkedList<>();
}
public static class PersistentQueue extends PersistentBag implements Queue {
@Override
public boolean offer(Object o) {
return add(o);
}
@Override
public Object remove() {
return poll();
}
@Override
public Object poll() {
int size = size();
if(size > 0) {
Object first = get(0);
remove( 0 );
return first;
}
throw new NoSuchElementException ();
}
@Override
public Object element() {
return peek();
}
@Override
public Object peek() {
return size() > 0 ? get( 0 ) : null;
}
}
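The peek/poll behavior implemented above mirrors the standard java.util.Queue semantics, which can be checked in plain Java with a LinkedList:

```java
import java.util.LinkedList;
import java.util.Queue;

// Plain-Java illustration of the queue semantics the custom collection
// exposes: peek inspects the head without removing it, poll removes it.
public class QueueDemo {

    public static boolean headIsPolledFirst() {
        Queue<String> phones = new LinkedList<>();
        phones.offer( "027-123-4567" );
        phones.offer( "028-234-9876" );
        String head = phones.peek();                 // inspect the head, no removal
        boolean same = head.equals( phones.poll() ); // poll removes that same element
        return same && phones.size() == 1;
    }
}
```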
The reason the Queue interface is not used for the entity attribute is that Hibernate only
allows the following types:
java.util.List
java.util.Set
java.util.Map
java.util.SortedSet
java.util.SortedMap
However, a custom collection type can still be used as long as the underlying base type is one of the
aforementioned persistent types.
JAVA
Person person = entityManager.find( Person.class, 1L );
Queue<Phone> phones = person.getPhones();
Phone head = phones.peek();
assertSame( head, phones.poll() );
assertEquals( 1, phones.size() );
2.9. Natural Ids
Natural ids represent domain model unique identifiers that have a meaning in the real world. A natural id can be a single attribute or a combination of attributes, and it is mapped with the @NaturalId annotation:
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@NaturalId
private String isbn;
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@NaturalId
@Embedded
private Isbn isbn;
@Embeddable
public static class Isbn implements Serializable {
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Isbn isbn = (Isbn) o;
return Objects.equals( isbn10, isbn.isbn10 ) &&
Objects.equals( isbn13, isbn.isbn13 );
}
@Override
public int hashCode() {
return Objects.hash( isbn10, isbn13 );
}
}
JAVA
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@NaturalId
private String productNumber;
@NaturalId
@ManyToOne(fetch = FetchType.LAZY)
private Publisher publisher;
@Entity(name = "Publisher")
public static class Publisher implements Serializable {
@Id
private Long id;
@Override
public boolean equals(Object o) {
if ( this == o ) {
return true;
}
if ( o == null || getClass() != o.getClass() ) {
return false;
}
Publisher publisher = (Publisher) o;
return Objects.equals( id, publisher.id ) &&
Objects.equals( name, publisher.name );
}
@Override
public int hashCode() {
return Objects.hash( id, name );
}
}
If the entity does not define a natural id, trying to load an entity by its natural id will throw an
exception.
JAVA
Book book = entityManager
.unwrap( Session.class )
.byNaturalId( Book.class )
.using( "isbn", "978-9730228236" )
.load();
JAVA
Book book = entityManager
.unwrap( Session.class )
.byNaturalId( Book.class )
.using(
"isbn",
new Isbn(
"973022823X",
"978-9730228236"
) )
.load();
JAVA
Book book = entityManager
.unwrap( Session.class )
.byNaturalId( Book.class )
.using( "productNumber", "973022823X" )
.using( "publisher", publisher )
.load();
load()
obtains a reference to the entity, making sure that the entity state is initialized
getReference()
obtains a reference to the entity. The state may or may not be initialized. If the entity is already associated with the current
running Session, that reference (loaded or not) is returned. If the entity is not loaded in the current Session and the entity
supports proxy generation, an uninitialized proxy is generated and returned, otherwise the entity is loaded from the database
and returned.
NaturalIdLoadAccess allows loading an entity by natural id and at the same time apply a pessimistic lock. For additional details
on locking, see the Locking chapter.
We will discuss the last method available on NaturalIdLoadAccess ( setSynchronizationEnabled() ) in Natural Id - Mutability
and Caching.
Because the Book entity defines a "simple" natural id, we can load it as follows:
JAVA
Book book = entityManager
.unwrap( Session.class )
.bySimpleNaturalId( Book.class )
.load( "978-9730228236" );
JAVA
Book book = entityManager
.unwrap( Session.class )
.bySimpleNaturalId( Book.class )
.load(
new Isbn(
"973022823X",
"978-9730228236"
)
);
Here we see the use of the org.hibernate.SimpleNaturalIdLoadAccess contract, obtained via Session#bySimpleNaturalId().
SimpleNaturalIdLoadAccess is similar to NaturalIdLoadAccess except that it does not define the using method. Instead,
because these simple natural ids are defined on just one attribute, we can pass the corresponding natural id
attribute value directly to the load() and getReference() methods.
If the entity does not define a natural id, or if the natural id is not of a "simple" type, an exception will
be thrown.
If the value(s) of the natural id attribute(s) change, @NaturalId(mutable=true) should be used instead.
JAVA
@Entity(name = "Author")
public static class Author {
@Id
private Long id;
@NaturalId(mutable = true)
private String email;
Within the Session, Hibernate maintains a mapping from natural id values to entity identifier (PK) values. If natural id values
change, it is possible for this mapping to become out of date until a flush occurs.
To work around this condition, Hibernate will attempt to discover any such pending changes and adjust for them when the load()
or getReference() method is executed. To be clear: this is only pertinent for mutable natural ids.
This discovery and adjustment have a performance impact. If an application is certain that none of its
mutable natural ids already associated with the Session have changed, it can disable that checking by
calling setSynchronizationEnabled(false) (the default is true). This will cause Hibernate to skip
checking the mutable natural ids.
JAVA
Author author = entityManager
.unwrap( Session.class )
.bySimpleNaturalId( Author.class )
.load( "john@acme.com" );
author.setEmail( "john.doe@acme.com" );
assertNull(
entityManager
.unwrap( Session.class )
.bySimpleNaturalId( Author.class )
.setSynchronizationEnabled( false )
.load( "john.doe@acme.com" )
);
assertSame( author,
entityManager
.unwrap( Session.class )
.bySimpleNaturalId( Author.class )
.setSynchronizationEnabled( true )
.load( "john.doe@acme.com" )
);
Not only can this natural-id-to-PK resolution be cached in the Session, but it can also be cached in the second-level cache, if
second-level caching is enabled.
JAVA
@Entity(name = "Book")
@NaturalIdCache
public static class Book {
@Id
private Long id;
@NaturalId
private String isbn;
2.10. Dynamic Model
Persistent entities do not necessarily have to be represented as POJO classes; Hibernate also supports dynamic models, where entities are represented as java.util.Map instances.
JPA only acknowledges the entity model mapping so, if you are concerned about JPA provider
portability, it’s best to stick to the strict POJO model. On the other hand, Hibernate can work with both POJO entities and dynamic entity models.
A given entity has just one entity mode within a given SessionFactory. This is a change from previous versions, which allowed
defining multiple entity modes for an entity and selecting which one to load. Entity modes can now be mixed within a domain model; a
dynamic entity might reference a POJO entity and vice versa.
XML
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
<class entity-name="Book">
<id name="isbn" column="isbn" length="32" type="string"/>
</class>
</hibernate-mapping>
After you have defined your entity mapping, you need to instruct Hibernate to use the dynamic mapping mode:
JAVA
settings.put( "hibernate.default_entity_mode", "dynamic-map" );
When saving the following Book dynamic entity, Hibernate generates the following SQL statement:
JAVA
Map<String, String> book = new HashMap<>();
book.put( "isbn", "978-9730228236" );
book.put( "title", "High-Performance Java Persistence" );
book.put( "author", "Vlad Mihalcea" );
entityManager
.unwrap( Session.class )
.save( "Book", book );
SQL
insert
into
Book
(title, author, isbn)
values
(?, ?, ?)
The main advantage of dynamic models is the quick turnaround time for prototyping, without the need
for entity class implementation. The main downside is that you lose compile-time type checking and will
likely deal with many exceptions at runtime. However, as a result of the Hibernate mapping, the
database schema can easily be normalized and sound, allowing you to add a proper domain model implementation
later on.
It is also interesting to note that dynamic models are great for certain integration use cases as well. Envers, for
example, makes extensive use of dynamic models to represent the historical data.
2.11. Inheritance
Although relational database systems don’t provide support for inheritance, Hibernate provides several strategies to leverage this
object-oriented trait onto domain model entities:
MappedSuperclass
Inheritance is implemented in the domain model only without reflecting it in the database schema. See MappedSuperclass.
Single table
The domain model class hierarchy is materialized into a single table which contains entities belonging to different class types.
See Single table.
Joined table
The base class and all the subclasses have their own database tables, and fetching a subclass entity requires a join with the
parent table as well. See Joined table.
Table per class
Each subclass has its own table containing both the subclass and the base class properties. See Table per class.
2.11.1. MappedSuperclass
In the following domain model class hierarchy, a DebitAccount and a CreditAccount share the same Account base class.
[Class diagram: Account (id, owner, balance, interestRate) with subclasses DebitAccount (overdraftFee) and CreditAccount (creditLimit)]
When using MappedSuperclass , the inheritance is visible in the domain model only, and each database table contains both the
base class and the subclass properties.
JAVA
@MappedSuperclass
public static class Account {
@Id
private Long id;
@Entity(name = "DebitAccount")
public static class DebitAccount extends Account {
@Entity(name = "CreditAccount")
public static class CreditAccount extends Account {
Because the @MappedSuperclass inheritance model is not mirrored at the database level, it’s not
possible to use polymorphic queries (fetching subclasses by their base class).
2.11.2. Single table
The single table inheritance strategy maps all subclasses of a class hierarchy to a single database table.
When omitting an explicit inheritance strategy (e.g. @Inheritance ), JPA will choose the SINGLE_TABLE
strategy by default.
JAVA
@Entity(name = "Account")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public static class Account {
@Id
private Long id;
@Entity(name = "DebitAccount")
public static class DebitAccount extends Account {
@Entity(name = "CreditAccount")
public static class CreditAccount extends Account {
SQL
CREATE TABLE Account (
DTYPE VARCHAR(31) NOT NULL ,
id BIGINT NOT NULL ,
balance NUMERIC(19, 2) ,
interestRate NUMERIC(19, 2) ,
owner VARCHAR(255) ,
overdraftFee NUMERIC(19, 2) ,
creditLimit NUMERIC(19, 2) ,
PRIMARY KEY ( id )
)
Each subclass in a hierarchy must define a unique discriminator value, which is used to differentiate between rows belonging to
separate subclass types. If this is not specified, the DTYPE column is used as a discriminator, storing the associated subclass name.
JAVA
entityManager.persist( debitAccount );
entityManager.persist( creditAccount );
SQL
INSERT INTO Account (balance, interestRate, owner, overdraftFee, DTYPE, id)
VALUES (100, 1.5, 'John Doe', 25, 'DebitAccount', 1)
When using polymorphic queries, only a single table needs to be scanned to fetch all associated subclass instances.
JAVA
List<Account > accounts = entityManager
.createQuery( "select a from Account a" )
.getResultList();
SQL
SELECT singletabl0_.id AS id2_0_ ,
singletabl0_.balance AS balance3_0_ ,
singletabl0_.interestRate AS interest4_0_ ,
singletabl0_.owner AS owner5_0_ ,
singletabl0_.overdraftFee AS overdraf6_0_ ,
singletabl0_.creditLimit AS creditLi7_0_ ,
singletabl0_.DTYPE AS DTYPE1_0_
FROM Account singletabl0_
Among all other inheritance alternatives, the single table strategy performs the best since it requires
access to one table only. Because all subclass columns are stored in a single table, it’s not possible to
use NOT NULL constraints on subclass-specific columns, so integrity checks must be moved either into the data access
layer or enforced through CHECK or TRIGGER constraints.
Discriminator
The discriminator column contains marker values that tell the persistence layer what subclass to instantiate for a particular row.
Hibernate Core supports the following restricted set of types as discriminator column: String, char, int, byte, short, boolean (including yes_no and true_false).
Use the @DiscriminatorColumn to define the discriminator column as well as the discriminator type.
The force attribute is useful if the table contains rows with extra discriminator values that are not mapped to a
persistent class. This could, for example, occur when working with a legacy database. If force is set to true
Hibernate will specify the allowed discriminator values in the SELECT query, even when retrieving all instances of
the root class.
The second option, insert , tells Hibernate whether or not to include the discriminator column in SQL INSERTs.
Usually, the column should be part of the INSERT statement, but if your discriminator column is also part of a
mapped composite identifier you have to set this option to false.
Discriminator formula
Assuming a legacy database schema where the discriminator is based on inspecting a certain column, we can take advantage of
the Hibernate specific @DiscriminatorFormula annotation and map the inheritance model as follows:
JAVA
@Entity(name = "Account")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorFormula(
"case when debitKey is not null " +
"then 'Debit' " +
"else ( " +
" case when creditKey is not null " +
" then 'Credit' " +
" else 'Unknown' " +
" end ) " +
"end "
)
public static class Account {
@Id
private Long id;
@Entity(name = "DebitAccount")
@DiscriminatorValue(value = "Debit")
public static class DebitAccount extends Account {
@Entity(name = "CreditAccount")
@DiscriminatorValue(value = "Credit")
public static class CreditAccount extends Account {
SQL
CREATE TABLE Account (
id int8 NOT NULL ,
balance NUMERIC(19, 2) ,
interestRate NUMERIC(19, 2) ,
owner VARCHAR(255) ,
debitKey VARCHAR(255) ,
overdraftFee NUMERIC(19, 2) ,
creditKey VARCHAR(255) ,
creditLimit NUMERIC(19, 2) ,
PRIMARY KEY ( id )
)
The @DiscriminatorFormula defines a custom SQL clause that can be used to identify a certain subclass type. The
@DiscriminatorValue defines the mapping between the result of the @DiscriminatorFormula and the inheritance subclass
type.
null
If the underlying discriminator column is null, the null discriminator mapping is used.
not null
If the underlying discriminator column has a not-null value that is not explicitly mapped to any entity, the not-null
discriminator mapping is used.
To understand how these two values work, consider the following entity mapping:
JAVA
@Entity(name = "Account")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorValue( "null" )
public static class Account {
@Id
private Long id;
@Entity(name = "DebitAccount")
@DiscriminatorValue( "Debit" )
public static class DebitAccount extends Account {
@Entity(name = "CreditAccount")
@DiscriminatorValue( "Credit" )
public static class CreditAccount extends Account {
@Entity(name = "OtherAccount")
@DiscriminatorValue( "not null" )
public static class OtherAccount extends Account {
The Account class has a @DiscriminatorValue( "null" ) mapping, meaning that any account row which does not contain
any discriminator value will be mapped to an Account base class entity. The DebitAccount and CreditAccount entities use
explicit discriminator values. The OtherAccount entity is used as a generic account type because it maps any database row
whose discriminator column is not explicitly assigned to any other entity in the current inheritance tree.
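The resolution order can be pictured in plain Java. The following is a hypothetical sketch (not Hibernate's actual implementation): explicit discriminator values win, a NULL column falls back to the "null" mapping, and any unmapped non-null value falls back to the "not null" mapping.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the discriminator resolution order; not Hibernate internals.
public class DiscriminatorResolution {

    private static final Map<String, String> EXPLICIT = new HashMap<>();
    static {
        EXPLICIT.put( "Debit", "DebitAccount" );
        EXPLICIT.put( "Credit", "CreditAccount" );
    }

    public static String resolve(String discriminatorValue) {
        if ( discriminatorValue == null ) {
            return "Account";            // @DiscriminatorValue( "null" )
        }
        String explicit = EXPLICIT.get( discriminatorValue );
        return explicit != null
            ? explicit                   // explicitly mapped value
            : "OtherAccount";            // @DiscriminatorValue( "not null" )
    }

    public static void main(String[] args) {
        System.out.println( resolve( null ) );       // Account
        System.out.println( resolve( "Debit" ) );    // DebitAccount
        System.out.println( resolve( "other" ) );    // OtherAccount
    }
}
```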
JAVA
entityManager.persist( debitAccount );
entityManager.persist( creditAccount );
entityManager.persist( account );
entityManager.persist( otherAccount );

Map<Long, Account> accounts = entityManager
    .createQuery( "select a from Account a", Account.class )
    .getResultList()
    .stream()
    .collect( Collectors.toMap( Account::getId, Function.identity() ) );

assertEquals( 4, accounts.size() );
assertEquals( DebitAccount.class, accounts.get( 1L ).getClass() );
assertEquals( CreditAccount.class, accounts.get( 2L ).getClass() );
assertEquals( Account.class, accounts.get( 3L ).getClass() );
assertEquals( OtherAccount.class, accounts.get( 4L ).getClass() );
SQL
As you can see, the Account entity row has a value of NULL in the DTYPE discriminator column, while the OtherAccount entity
was saved with a DTYPE column value of other , which has no explicit mapping.
A discriminator column is not required for this mapping strategy. Each subclass must, however, declare a table column holding
the object identifier.
JAVA
@Entity(name = "Account")
@Inheritance(strategy = InheritanceType .JOINED)
public static class Account {
@Id
private Long id;
@Entity(name = "DebitAccount")
public static class DebitAccount extends Account {
@Entity(name = "CreditAccount")
public static class CreditAccount extends Account {
SQL
CREATE TABLE Account (
id BIGINT NOT NULL ,
balance NUMERIC(19, 2) ,
interestRate NUMERIC(19, 2) ,
owner VARCHAR(255) ,
PRIMARY KEY ( id )
)
The primary key of this table is also a foreign key to the superclass table, described by the
@PrimaryKeyJoinColumn annotation.
The table name still defaults to the non-qualified class name. Also, if @PrimaryKeyJoinColumn is not
set, the primary key / foreign key columns are assumed to have the same names as the primary key columns of
the primary table of the superclass.
JAVA
@Entity(name = "Account")
@Inheritance(strategy = InheritanceType .JOINED)
public static class Account {
@Id
private Long id;
@Entity(name = "DebitAccount")
@PrimaryKeyJoinColumn(name = "account_id")
public static class DebitAccount extends Account {
@Entity(name = "CreditAccount")
@PrimaryKeyJoinColumn(name = "account_id")
public static class CreditAccount extends Account {
SQL
When using polymorphic queries, the base class table must be joined with all subclass tables to fetch every associated subclass
instance.
JAVA
List<Account> accounts = entityManager
    .createQuery( "select a from Account a" )
    .getResultList();
SQL
SELECT jointablet0_.id AS id1_0_ ,
jointablet0_.balance AS balance2_0_ ,
jointablet0_.interestRate AS interest3_0_ ,
jointablet0_.owner AS owner4_0_ ,
jointablet0_1_.overdraftFee AS overdraf1_2_ ,
jointablet0_2_.creditLimit AS creditLi1_1_ ,
CASE WHEN jointablet0_1_.id IS NOT NULL THEN 1
WHEN jointablet0_2_.id IS NOT NULL THEN 2
WHEN jointablet0_.id IS NOT NULL THEN 0
END AS clazz_
FROM Account jointablet0_
LEFT OUTER JOIN DebitAccount jointablet0_1_ ON jointablet0_.id = jointablet0_1_.id
LEFT OUTER JOIN CreditAccount jointablet0_2_ ON jointablet0_.id = jointablet0_2_.id
Joined table inheritance polymorphic queries can use several JOINs, which might affect
performance when fetching a large number of entities.
In Hibernate, it is not necessary to explicitly map such inheritance hierarchies. You can map each class as a separate entity root.
However, if you wish to use polymorphic associations (e.g. an association to the superclass of your hierarchy), you need to use the
union subclass mapping.
JAVA
@Entity(name = "Account")
@Inheritance(strategy = InheritanceType .TABLE_PER_CLASS)
public static class Account {
@Id
private Long id;
@Entity(name = "DebitAccount")
public static class DebitAccount extends Account {
@Entity(name = "CreditAccount")
public static class CreditAccount extends Account {
SQL
When using polymorphic queries, a UNION is required to fetch the base class table along with all subclass tables as well.
JAVA
List<Account> accounts = entityManager
    .createQuery( "select a from Account a" )
    .getResultList();
SQL
Polymorphic queries require multiple UNION queries, so be aware of the performance implications of a
large class hierarchy.
However, you can even query interfaces or base classes that don’t belong to the JPA entity inheritance model.
JAVA
public interface DomainModelEntity<ID> {

    ID getId();

    Integer getVersion();
}
If we have two entity mappings, a Book and a Blog , and the Blog entity is mapped with the @Polymorphism
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Polymorphism.html) annotation taking the
PolymorphismType.EXPLICIT setting:
JAVA
@Entity(name = "Event")
public static class Book implements DomainModelEntity <Long> {
@Id
private Long id;
@Version
private Integer version;
@Entity(name = "Blog")
@Polymorphism(type = PolymorphismType .EXPLICIT)
public static class Blog implements DomainModelEntity <Long> {
@Id
private Long id;
@Version
private Integer version;
We can now query against the DomainModelEntity interface, and Hibernate is going to fetch only the entities that are either
mapped with @Polymorphism(type = PolymorphismType.IMPLICIT) or not annotated at all with the @Polymorphism
annotation (implying the IMPLICIT behavior):
Example 269. Fetching Domain Model entities using non-mapped base class polymorphism
JAVA
List<DomainModelEntity> accounts = entityManager
    .createQuery(
        "select e " +
        "from org.hibernate.userguide.inheritance.polymorphism.DomainModelEntity e" )
    .getResultList();

assertEquals(1, accounts.size());
assertTrue( accounts.get( 0 ) instanceof Book );
Therefore, only the Book was fetched since the Blog entity was marked with the @Polymorphism(type =
PolymorphismType.EXPLICIT) annotation, which instructs Hibernate to skip it when executing a polymorphic query against a
non-mapped base class.
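The selection logic can be sketched in plain Java. This is a hypothetical illustration of the rule, not Hibernate's implementation: a polymorphic query against a non-mapped base type only targets entity classes whose polymorphism is implicit.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: which entity classes a query against a non-mapped
// base type would target, based on each class's implicit/explicit polymorphism flag.
public class PolymorphismFilter {

    public static List<String> queryTargets(Map<String, Boolean> implicitByEntity) {
        List<String> targets = new ArrayList<>();
        for ( Map.Entry<String, Boolean> e : implicitByEntity.entrySet() ) {
            if ( e.getValue() ) {       // IMPLICIT (the default) is included
                targets.add( e.getKey() );
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        // Book uses the default (IMPLICIT), Blog is marked EXPLICIT
        Map<String, Boolean> entities = Map.of( "Book", true, "Blog", false );
        System.out.println( queryTargets( entities ) ); // [Book]
    }
}
```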
2.12. Immutability
Immutability can be specified for both entities and collections.
JAVA
@Entity(name = "Event")
@Immutable
public static class Event {
@Id
private Long id;
Internally, Hibernate performs several optimizations for immutable entities, such as:
reducing the memory footprint, since there is no need to retain the dehydrated state for the dirty checking mechanism
speeding-up the Persistence Context flushing phase, since immutable entities can skip the dirty checking process
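The flush-time shortcut can be sketched in plain Java. This is a hypothetical model (the class and flag names are illustrative, not Hibernate internals): immutable entities are simply skipped by the dirty-checking loop.

```java
import java.util.List;

// Hypothetical sketch of a flush-time loop that skips immutable entities.
public class FlushSketch {

    public static class ManagedEntity {
        public final String name;
        public final boolean immutable;
        public final boolean modified;

        public ManagedEntity(String name, boolean immutable, boolean modified) {
            this.name = name;
            this.immutable = immutable;
            this.modified = modified;
        }
    }

    // Returns how many entities would produce an SQL UPDATE at flush time.
    public static long updatesNeeded(List<ManagedEntity> context) {
        return context.stream()
            .filter( e -> !e.immutable )    // immutable entities skip dirty checking
            .filter( e -> e.modified )
            .count();
    }

    public static void main(String[] args) {
        List<ManagedEntity> ctx = List.of(
            new ManagedEntity( "Event", true, true ),   // change is discarded
            new ManagedEntity( "Batch", false, true )   // triggers an UPDATE
        );
        System.out.println( updatesNeeded( ctx ) ); // 1
    }
}
```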
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Event event = new Event ();
event .setId( 1L );
event .setCreatedOn( new Date( ) );
event .setMessage( "Hibernate User Guide rocks!" );
entityManager.persist( event );
} );
When loading the entity and trying to change its state, Hibernate will skip any modification, therefore no SQL UPDATE statement
is executed.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Event event = entityManager.find( Event .class , 1L );
log.info( "Change event message" );
event .setMessage( "Hibernate User Guide" );
} );
doInJPA( this::entityManagerFactory, entityManager -> {
Event event = entityManager.find( Event .class , 1L );
assertEquals("Hibernate User Guide rocks!", event .getMessage());
} );
SQL
SELECT e.id AS id1_0_0_,
e.createdOn AS createdO2_0_0_,
e.message AS message3_0_0_
FROM event e
WHERE e.id = 1
JAVA
@Entity(name = "Batch")
public static class Batch {
@Id
private Long id;
@Entity(name = "Event")
@Immutable
public static class Event {
@Id
private Long id;
This time, not only is the Event entity immutable, but so is the Event collection stored by the Batch parent entity. Once the
immutable collection is created, it can never be modified.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Batch batch = new Batch ();
batch.setId( 1L );
batch.setName( "Change request" );
batch.getEvents().add( event1 );
batch.getEvents().add( event2 );
entityManager.persist( batch );
} );
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Batch batch = entityManager.find( Batch .class , 1L );
log.info( "Change batch name" );
batch.setName( "Proposed change request" );
} );
SQL
SELECT b.id AS id1_0_0_,
b.name AS name2_0_0_
FROM Batch b
WHERE b.id = 1
UPDATE batch
SET name = 'Proposed change request'
WHERE id = 1
JAVA
try {
doInJPA( this::entityManagerFactory, entityManager -> {
Batch batch = entityManager.find( Batch .class , 1L );
batch.getEvents().clear();
} );
}
catch ( Exception e ) {
log.error( "Immutable collections cannot be modified" );
}
BASH
javax.persistence.RollbackException : Error while committing the transaction
While immutable entity changes are simply discarded, modifying an immutable collection ends up in a
HibernateException being thrown.
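The behavior is analogous to the JDK's unmodifiable collection wrappers, which can be sketched as follows (plain JDK code, not Hibernate's collection wrapper):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ImmutableCollectionSketch {

    // Attempts to clear the view; returns false when the collection rejects mutation.
    public static boolean tryClear(List<String> view) {
        try {
            view.clear();
            return true;
        }
        catch (UnsupportedOperationException expected) {
            return false;
        }
    }

    public static void main(String[] args) {
        List<String> events = new ArrayList<>( List.of( "event1", "event2" ) );
        // Analogous to how an @Immutable collection rejects changes:
        List<String> immutableView = Collections.unmodifiableList( events );

        if ( !tryClear( immutableView ) ) {
            System.out.println( "Immutable collections cannot be modified" );
        }
    }
}
```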
3. Bootstrap
The term bootstrapping refers to initializing and starting a software component. In Hibernate, we are specifically talking about
the process of building a fully functional SessionFactory instance or EntityManagerFactory instance, for JPA. The process is
very different for each.
This chapter will not focus on all the possibilities of bootstrapping. Those will be covered in the
specific, more-relevant chapters later on. Instead, we focus here on the API calls needed to perform
the bootstrapping.
During the bootstrap process, you might want to customize Hibernate behavior so make sure you
check the Configurations section as well.
The BootstrapServiceRegistry holds three main services:
org.hibernate.boot.registry.classloading.spi.ClassLoaderService
org.hibernate.integrator.spi.IntegratorService
org.hibernate.boot.registry.selector.spi.StrategySelector
The latter controls how Hibernate resolves implementations of various strategy contracts. This is a very powerful service, but a full
discussion of it is beyond the scope of this guide.
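The idea behind strategy selection can be sketched in plain Java. This is a hypothetical sketch in the spirit of the StrategySelector service, not its actual API: registered short names resolve to implementation class names, and unregistered names are treated as fully-qualified class names.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of short-name to implementation resolution.
public class StrategySelectorSketch {

    private final Map<String, String> registrations = new HashMap<>();

    public void register(String shortName, String implementationClass) {
        registrations.put( shortName, implementationClass );
    }

    // Resolve a short name, falling back to treating the name as a class name.
    public String resolve(String nameOrClass) {
        return registrations.getOrDefault( nameOrClass, nameOrClass );
    }

    public static void main(String[] args) {
        StrategySelectorSketch selector = new StrategySelectorSketch();
        selector.register( "jdbc", "com.example.JdbcTransactionCoordinatorBuilder" );
        System.out.println( selector.resolve( "jdbc" ) );
        System.out.println( selector.resolve( "com.example.CustomStrategy" ) );
    }
}
```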
If you are ok with the default behavior of Hibernate in regards to these BootstrapServiceRegistry services
(which is quite often the case, especially in stand-alone environments), then building the
BootstrapServiceRegistry can be skipped.
If you wish to alter how the BootstrapServiceRegistry is built, that is controlled through the
org.hibernate.boot.registry.BootstrapServiceRegistryBuilder:
JAVA
BootstrapServiceRegistryBuilder bootstrapRegistryBuilder =
new BootstrapServiceRegistryBuilder ();
// add a custom ClassLoader
bootstrapRegistryBuilder.applyClassLoader( customClassLoader );
// manually add an Integrator
bootstrapRegistryBuilder.applyIntegrator( customIntegrator );
The services of the BootstrapServiceRegistry cannot be extended (added to) nor overridden
(replaced).
The second ServiceRegistry is the org.hibernate.boot.registry.StandardServiceRegistry . You will almost always need to
configure the StandardServiceRegistry , which is done through
org.hibernate.boot.registry.StandardServiceRegistryBuilder :
JAVA
// An example using an implicitly built BootstrapServiceRegistry
StandardServiceRegistryBuilder standardRegistryBuilder =
    new StandardServiceRegistryBuilder();

// An example using an explicitly built BootstrapServiceRegistry
StandardServiceRegistryBuilder standardRegistryBuilder =
    new StandardServiceRegistryBuilder( bootstrapRegistry );
A StandardServiceRegistry is also highly configurable via the StandardServiceRegistryBuilder API. See the
StandardServiceRegistryBuilder Javadocs
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/boot/registry/StandardServiceRegistryBuilder.html) for more details.
ServiceRegistry standardRegistry =
    new StandardServiceRegistryBuilder().build();

MetadataSources sources = new MetadataSources( standardRegistry );

// Adds the named JPA orm.xml resource as a source: which performs the
// classpath lookup and parses the XML
sources.addResource( "org/hibernate/example/Product.orm.xml" );

// Read a mapping as an application resource using the convention that a class named foo.bar.MyEntity is
// mapped by a file named foo/bar/MyEntity.hbm.xml which can be resolved as a classpath resource.
sources.addClass( MyEntity.class );
public class MyIntegrator implements org.hibernate.integrator.spi.Integrator {

    @Override
    public void integrate(
            Metadata metadata,
            SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {

        // 1) The EventListenerRegistry service is the thing with which event listeners are registered
        final EventListenerRegistry eventListenerRegistry =
            serviceRegistry.getService( EventListenerRegistry.class );

        // 2) This form adds the specified listener(s) to the beginning of the listener chain
        eventListenerRegistry.prependListeners( EventType.PERSIST,
            DefaultPersistEventListener.class );

        // 3) This form adds the specified listener(s) to the end of the listener chain
        eventListenerRegistry.appendListeners( EventType.MERGE,
            DefaultMergeEventListener.class );
    }

    @Override
    public void disintegrate(
            SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {
    }
}
MetadataSources has many other methods as well. Explore its API and Javadocs
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/boot/MetadataSources.html) for more information. Also, all methods on
MetadataSources offer fluent-style call chaining:
ServiceRegistry standardRegistry =
    new StandardServiceRegistryBuilder().build();

MetadataSources sources = new MetadataSources( standardRegistry )
    .addAnnotatedClass( MyEntity.class )
    .addResource( "org/hibernate/example/Product.orm.xml" );
Once we have the sources of mapping information defined, we need to build the Metadata object. If you are ok with the default
behavior in building the Metadata then you can simply call the buildMetadata
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/boot/MetadataSources.html#buildMetadata--) method of the
MetadataSources .
Notice that a ServiceRegistry can be passed at a number of points in this bootstrapping process.
The suggested approach is to build a StandardServiceRegistry yourself and pass that along to the
MetadataSources constructor. From there, MetadataBuilder , Metadata , SessionFactoryBuilder , and
SessionFactory will all pick up that same StandardServiceRegistry .
However, if you wish to adjust the process of building Metadata from MetadataSources , you will need to use the
MetadataBuilder as obtained via MetadataSources#getMetadataBuilder . MetadataBuilder allows a lot of control over the
Metadata building process. See its Javadocs (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/boot/MetadataBuilder.html) for
full details.
JAVA
ServiceRegistry standardRegistry =
    new StandardServiceRegistryBuilder().build();

MetadataSources sources = new MetadataSources( standardRegistry );

MetadataBuilder metadataBuilder = sources.getMetadataBuilder();

// specify the schema name to use for tables, etc., when none is explicitly specified
metadataBuilder.applyImplicitSchemaName( "my_default_schema" );
However, if you would like to adjust that building process, you will need to use SessionFactoryBuilder as obtained via
Metadata#getSessionFactoryBuilder . Again, see its Javadocs
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/boot/Metadata.html#getSessionFactoryBuilder--) for more details.
JAVA
StandardServiceRegistry standardRegistry = new StandardServiceRegistryBuilder ()
.configure( "org/hibernate/example/hibernate.cfg.xml" )
.build();
The bootstrapping API is quite flexible, but in most cases it makes the most sense to think of it as a 3 step process:
It uses the terms EE and SE for these two approaches, but those terms are very misleading in this context. What the JPA spec calls
EE bootstrapping implies the existence of a container (EE, OSGi, etc.) which will manage and inject the persistence context on behalf
of the application. What it calls SE bootstrapping is everything else. We will use the terms container-bootstrapping and application-
bootstrapping in this guide.
For compliant container-bootstrapping, the container will build an EntityManagerFactory for each persistent-unit defined in
the META-INF/persistence.xml configuration file and make that available to the application for injection via the
javax.persistence.PersistenceUnit annotation or via JNDI lookup.
JAVA
@PersistenceUnit
private EntityManagerFactory emf;
Or, in case you have multiple Persistence Units (e.g. multiple persistence.xml configuration files), you can inject a specific
EntityManagerFactory by its unit name:
JAVA
@PersistenceUnit(
unitName = "CRM"
)
private EntityManagerFactory entityManagerFactory;
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
version="2.1">
<persistence-unit name="CRM">
<description>
Persistence unit for Hibernate User Guide
</description>
<provider> org.hibernate.jpa.HibernatePersistenceProvider</provider>
<class> org.hibernate.documentation.userguide.Document</class>
<properties>
<property name="javax.persistence.jdbc.driver"
value="org.h2.Driver" />
<property name="javax.persistence.jdbc.url"
value="jdbc:h2:mem:db1;DB_CLOSE_DELAY=-1;MVCC=TRUE" />
<property name="javax.persistence.jdbc.user"
value="sa" />
<property name="javax.persistence.jdbc.password"
value="" />
<property name="hibernate.show_sql"
value="true" />
<property name="hibernate.hbm2ddl.auto"
value="update" />
</properties>
</persistence-unit>
</persistence>
For compliant application-bootstrapping, rather than the container building the EntityManagerFactory for the application, the
application builds the EntityManagerFactory itself using the javax.persistence.Persistence bootstrap class. The
application creates an EntityManagerFactory by calling the createEntityManagerFactory method:
JAVA
// Create an EMF for our CRM persistence-unit.
EntityManagerFactory emf = Persistence .createEntityManagerFactory( "CRM" );
If you don’t want to provide a persistence.xml configuration file, JPA allows you to provide all the
configuration options in a PersistenceUnitInfo
(http://docs.oracle.com/javaee/7/api/javax/persistence/spi/PersistenceUnitInfo.html) implementation and call
HibernatePersistenceProvider#createContainerEntityManagerFactory
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/jpa/HibernatePersistenceProvider.html#createContainerEntityManagerFactory-javax.persistence.spi.PersistenceUnitInfo-java.util.Map-).
To inject the default Persistence Context, you can use the @PersistenceContext
(http://docs.oracle.com/javaee/7/api/javax/persistence/PersistenceContext.html) annotation.
JAVA
@PersistenceContext
private EntityManager em;
JAVA
@PersistenceContext(
unitName = "CRM",
properties = {
@PersistenceProperty(
name="org.hibernate.flushMode",
value= "MANUAL"
)
}
)
private EntityManager entityManager;
If you would like additional details on accessing and using EntityManager instances, sections 7.6 and
7.7 of the JPA 2.1 specification cover container-managed and application-managed EntityManagers ,
respectively.
annotations
XML mappings
Although annotations are much more common, there are projects where XML mappings are preferred. You can even mix
annotations and XML mappings so that you can override annotation mappings with XML configurations that can be easily
changed without recompiling the project source code. This is possible because, if there are two conflicting mappings, the XML
mappings take precedence over their annotation counterparts.
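That precedence rule boils down to a simple override. The following is a hypothetical helper illustrating the rule, not JPA provider code:

```java
public class MappingPrecedence {

    // The XML mapping wins over the annotation mapping when both define the same setting.
    public static String effectiveValue(String annotationValue, String xmlValue) {
        return xmlValue != null ? xmlValue : annotationValue;
    }

    public static void main(String[] args) {
        // e.g. @Column(name = "title") overridden by an orm.xml column mapping
        System.out.println( effectiveValue( "title", "book_title" ) );  // book_title
        System.out.println( effectiveValue( "title", null ) );          // title
    }
}
```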
The JPA specification requires the XML mappings to be located on the classpath:
“ An object/relational mapping XML file named orm.xml may be specified in the META-INF directory in
the root of the persistence unit or in the META-INF directory of any jar file referenced by the
persistence.xml .
Alternatively, or in addition, one or more mapping files may be referenced by the mapping-file elements of
the persistence-unit element. These mapping files may be present anywhere on the classpath.
— Section 8.2.1.6.2 of the JPA 2.1 Specification
Therefore, the mapping files can reside in the application jar artifacts, or they can be stored in an external folder location, with
the caveat that that location be included in the classpath.
Hibernate is more lenient in this regard, so you can use any external location, even one outside of the application's configured classpath.
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
version="2.1">
<persistence-unit name="CRM">
<description>
Persistence unit for Hibernate User Guide
</description>
<provider> org.hibernate.jpa.HibernatePersistenceProvider</provider>
<mapping-file> file:///etc/opt/app/mappings/orm.xml</mapping-file>
<properties>
<property name="javax.persistence.jdbc.driver"
value="org.h2.Driver" />
<property name="javax.persistence.jdbc.url"
value="jdbc:h2:mem:db1;DB_CLOSE_DELAY=-1;MVCC=TRUE" />
<property name="javax.persistence.jdbc.user"
value="sa" />
<property name="javax.persistence.jdbc.password"
value="" />
<property name="hibernate.show_sql"
value="true" />
<property name="hibernate.hbm2ddl.auto"
value="update" />
</properties>
</persistence-unit>
</persistence>
In the persistence.xml configuration file above, the orm.xml XML file containing all JPA entity mappings is located in the
/etc/opt/app/mappings/ folder.
When using Hibernate as a JPA provider, the EntityManagerFactory is backed by a SessionFactory . For this reason, you
might still want to use the Metadata object to pass various settings which cannot be supplied via the standard Hibernate
configuration settings.
public class SqlFunctionMetadataBuilderContributor implements MetadataBuilderContributor {

    @Override
    public void contribute(MetadataBuilder metadataBuilder) {
        metadataBuilder.applySqlFunction(
            "instr", new StandardSQLFunction( "instr", StandardBasicTypes.STRING )
        );
    }
}
The above MetadataBuilderContributor is used to register a SQL function which is not defined by the currently running
Hibernate Dialect , but which we need to reference in our JPQL queries.
You can then pass the custom MetadataBuilderContributor via the hibernate.metadata_builder_contributor
configuration property, as explained in the Configurations chapter.
4. Schema generation
Hibernate allows you to generate the database from the entity mappings.
Although the automatic schema generation is very useful for testing and prototyping purposes, in a
production environment, it’s much more flexible to manage the schema using incremental migration
scripts.
Traditionally, the process of generating the schema from the entity mappings has been called HBM2DDL . To get a list of Hibernate-native
and JPA-specific configuration properties, consider reading the Configurations section.
@Entity(name = "Customer")
public class Customer {
@Id
private Integer id;
@Lob
@Basic( fetch = FetchType .LAZY )
@LazyGroup( "lobs" )
private Blob image;
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@OneToMany(mappedBy = "author")
private List<Book> books = new ArrayList <>();
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@NaturalId
private String isbn;
@ManyToOne
private Person author;
If the hibernate.hbm2ddl.auto configuration is set to create , Hibernate is going to generate the following database schema:
SQL
create sequence book_sequence start with 1 increment by 1
XML
<property
name="hibernate.hbm2ddl.import_files"
value="schema-generation.sql" />
Hibernate is going to execute the script file after the schema is automatically generated.
XML
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd" >
<hibernate-mapping>
<database-object>
<create>
CREATE OR REPLACE FUNCTION sp_count_books(
IN authorId bigint,
OUT bookCount bigint)
RETURNS bigint AS
$BODY$
BEGIN
SELECT COUNT(*) INTO bookCount
FROM book
WHERE author_id = authorId;
END;
$BODY$
LANGUAGE plpgsql;
</create>
<drop></drop>
<dialect-scope name="org.hibernate.dialect.PostgreSQL95Dialect" />
</database-object>
</hibernate-mapping>
When the SessionFactory is bootstrapped, Hibernate is going to execute the database-object , therefore creating the
sp_count_books function.
JAVA
@Entity(name = "Book")
@Check( constraints = "CASE WHEN isbn IS NOT NULL THEN LENGTH(isbn) = 13 ELSE true END")
public static class Book {
@Id
private Long id;
@NaturalId
private String isbn;
Now, if you try to add a Book entity with an isbn attribute whose length is not 13 characters, a
ConstraintViolationException is going to be thrown.
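The CASE expression in the @Check clause is equivalent to the following plain-Java predicate:

```java
public class IsbnCheck {

    // Mirrors: CASE WHEN isbn IS NOT NULL THEN LENGTH(isbn) = 13 ELSE true END
    public static boolean passes(String isbn) {
        return isbn == null || isbn.length() == 13;
    }

    public static void main(String[] args) {
        System.out.println( passes( "11-11-2016" ) );     // false: 10 characters
        System.out.println( passes( "9789730228236" ) );  // true: 13 characters
        System.out.println( passes( null ) );             // true: NULL is allowed
    }
}
```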
JAVA
Book book = new Book();
book.setId( 1L );
book.setPrice( 49.99d );
book.setTitle( "High-Performance Java Persistence" );
book.setIsbn( "11-11-2016" );
entityManager.persist( book );
SQL
INSERT INTO Book (isbn, price, title, id)
VALUES ('11-11-2016', 49.99, 'High-Performance Java Persistence', 1)
JAVA
@Entity(name = "Person")
@DynamicInsert
public static class Person {
@Id
private Long id;
@ColumnDefault("'N/A'")
private String name;
@ColumnDefault("-1")
private Long clientId;
SQL
CREATE TABLE Person (
id BIGINT NOT NULL,
clientId BIGINT DEFAULT -1,
name VARCHAR(255) DEFAULT 'N/A',
PRIMARY KEY (id)
)
In the mapping above, both the name and clientId table columns are going to use a DEFAULT value.
The entity is annotated with the @DynamicInsert annotation so that the INSERT statement does not
include the entity attributes that have not been set.
This way, when omitting the name and clientId attributes, the database is going to set them according to their default
values.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Person person = new Person ();
person.setId( 1L );
entityManager.persist( person );
} );
doInJPA( this::entityManagerFactory, entityManager -> {
Person person = entityManager.find( Person .class , 1L );
assertEquals( "N/A", person.getName() );
assertEquals( Long.valueOf( -1L ), person.getClientId() );
} );
SQL
INSERT INTO Person (id) VALUES (?)
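The column-list pruning done by @DynamicInsert can be sketched in plain Java (a hypothetical helper, not Hibernate's SQL generator): only the attributes that were actually set end up in the INSERT column list, so the omitted columns fall back to their database DEFAULT values.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DynamicInsertSketch {

    // Build the INSERT column list from the attributes that were actually set.
    public static String insertColumns(Map<String, Object> attributes) {
        StringBuilder columns = new StringBuilder();
        for ( Map.Entry<String, Object> e : attributes.entrySet() ) {
            if ( e.getValue() != null ) {           // unset attributes are omitted
                if ( columns.length() > 0 ) {
                    columns.append( ", " );
                }
                columns.append( e.getKey() );
            }
        }
        return columns.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> person = new LinkedHashMap<>();
        person.put( "id", 1L );
        person.put( "name", null );         // left unset, DEFAULT 'N/A' applies
        person.put( "clientId", null );     // left unset, DEFAULT -1 applies
        System.out.println( insertColumns( person ) );  // id
    }
}
```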
Considering the following entity mapping, Hibernate generates the unique constraint DDL when creating the database schema:
@Entity
@Table(
    name = "book",
    uniqueConstraints = @UniqueConstraint(
        name = "uk_book_title_author",
        columnNames = {
            "title",
            "author_id"
        }
    )
)
public static class Book {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    @ManyToOne
    @JoinColumn(name = "author_id")
    private Author author;
}

@Entity
@Table(name = "author")
public static class Author {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "first_name")
    private String firstName;

    @Column(name = "last_name")
    private String lastName;
}
With the uk_book_title_author unique constraint in place, it’s no longer possible to add two books with the same title for
the same author.
JAVA
Author _author = doInJPA( this::entityManagerFactory, entityManager -> {
    Author author = new Author();
    author.setFirstName( "Vlad" );
    author.setLastName( "Mihalcea" );
    entityManager.persist( author );

    return author;
} );

doInJPA( this::entityManagerFactory, entityManager -> {
    Book book = new Book();
    book.setTitle( "High-Performance Java Persistence" );
    book.setAuthor( _author );
    entityManager.persist( book );
} );

try {
    doInJPA( this::entityManagerFactory, entityManager -> {
        Book book = new Book();
        book.setTitle( "High-Performance Java Persistence" );
        book.setAuthor( _author );
        entityManager.persist( book );
    } );
}
catch (Exception expected) {
    assertNotNull( ExceptionUtil.findCause( expected, ConstraintViolationException.class ) );
}
insert
into
author
(first_name, last_name, id)
values
(?, ?, ?)
insert
into
book
(author_id, title, id)
values
(?, ?, ?)
insert
into
book
(author_id, title, id)
values
(?, ?, ?)
The second INSERT statement fails because of the unique constraint violation.
Considering the following entity mapping, Hibernate generates the index when creating the database schema:
@Entity
@Table(
    name = "author",
    indexes = @Index(
        name = "idx_author_first_last_name",
        columnList = "first_name, last_name",
        unique = false
    )
)
public static class Author {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "first_name")
    private String firstName;

    @Column(name = "last_name")
    private String lastName;
}
SQL
create table author (
    id bigint not null,
    first_name varchar(255),
    last_name varchar(255),
    primary key (id)
)

create index idx_author_first_last_name
    on author (first_name, last_name)
5. Persistence Context
Both the org.hibernate.Session API and javax.persistence.EntityManager API represent a context for dealing with
persistent data. This concept is called a persistence context . Persistent data has a state in relation to both a persistence
context and the underlying database.
transient
the entity has just been instantiated and is not associated with a persistence context. It has no persistent representation in the
database and typically no identifier value has been assigned (unless the assigned generator was used).
managed , or persistent
the entity has an associated identifier and is associated with a persistence context. It may or may not physically exist in the
database yet.
detached
the entity has an associated identifier but is no longer associated with a persistence context (usually because the persistence
context was closed or the instance was evicted from the context)
removed
the entity has an associated identifier and is associated with a persistence context, however, it is scheduled for removal from
the database.
Much of the org.hibernate.Session and javax.persistence.EntityManager methods deal with moving entities between
these states.
JAVA
Session session = entityManager.unwrap( Session .class );
SessionImplementor sessionImplementor = entityManager.unwrap( SessionImplementor .class );
5.2.1. Capabilities
Hibernate supports the enhancement of an application Java domain model for the purpose of adding various persistence-related
capabilities directly into the class.
Lazy attributes can be designated to be loaded together, and this is called a "lazy group". By default, all singular attributes are part
of a single group, meaning that when one lazy singular attribute is accessed all lazy singular attributes are loaded. Lazy plural
attributes, by default, are each a lazy group by themselves. This behavior is explicitly controllable through the
@org.hibernate.annotations.LazyGroup annotation.
@Entity
public class Customer {

    @Id
    private Integer id;

    @Basic( fetch = FetchType.LAZY )
    private UUID accountsPayableXrefId;

    @Lob
    @Basic( fetch = FetchType.LAZY )
    @LazyGroup( "lobs" )
    private Blob image;
}
In the above example, we have 2 lazy attributes: accountsPayableXrefId and image . Each is part of a different fetch group
(accountsPayableXrefId is part of the default fetch group), which means that accessing accountsPayableXrefId will not force
the loading of the image attribute, and vice-versa.
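The grouping rule can be sketched in plain Java, using the attribute names from the example above (a hypothetical model, not the enhancer's internals): accessing any attribute of a lazy group loads that whole group and nothing more.

```java
import java.util.List;
import java.util.Map;

public class LazyGroupSketch {

    // Accessing any attribute of a lazy group loads the whole group, nothing more.
    private static final Map<String, List<String>> GROUPS = Map.of(
        "DEFAULT", List.of( "accountsPayableXrefId" ),
        "lobs", List.of( "image" )
    );

    public static List<String> attributesLoadedByAccessing(String attribute) {
        for ( List<String> group : GROUPS.values() ) {
            if ( group.contains( attribute ) ) {
                return group;
            }
        }
        throw new IllegalArgumentException( "Unknown lazy attribute: " + attribute );
    }

    public static void main(String[] args) {
        // Reading accountsPayableXrefId does not trigger the loading of image
        System.out.println( attributesLoadedByAccessing( "accountsPayableXrefId" ) );
    }
}
```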
As a hopefully temporary legacy hold-over, it is currently required that all lazy singular associations
(many-to-one and one-to-one) also include @LazyToOne(LazyToOneOption.NO_PROXY) . The plan is to
relax that requirement later.
If your application does not need to care about "internal state changing data-type" use cases, bytecode-enhanced dirty tracking
might be a worthwhile alternative to consider, especially in terms of performance. In this approach Hibernate will manipulate
the bytecode of your classes to add "dirty tracking" directly to the entity, allowing the entity itself to keep track of which of its
attributes have changed. During the flush time, Hibernate asks your entity what has changed rather than having to perform the
state-diff calculations.
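What the enhancement effectively adds can be sketched by hand in plain Java (hypothetical method and field names; the real enhancer generates different internals):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of an entity enhanced for self-dirty tracking.
public class TrackedPerson {

    private Long id;
    private String name;

    // added by enhancement: the entity records its own changed attributes
    private final Set<String> dirtyAttributes = new LinkedHashSet<>();

    public void setName(String name) {
        this.name = name;
        dirtyAttributes.add( "name" );      // tracking injected into the setter
    }

    // at flush time, Hibernate asks the entity instead of diffing state snapshots
    public Set<String> getDirtyAttributes() {
        return dirtyAttributes;
    }

    public static void main(String[] args) {
        TrackedPerson person = new TrackedPerson();
        person.setName( "John Doe" );
        System.out.println( person.getDirtyAttributes() );  // [name]
    }
}
```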
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@OneToMany(mappedBy = "author")
private List<Book> books = new ArrayList <>();
@Entity(name = "Book")
public static class Book {
@Id
private Long id;
@NaturalId
private String isbn;
@ManyToOne
private Person author;
JAVA
Person person = new Person();
person.setName( "John Doe" );

Book book = new Book();
person.getBooks().add( book );

book.getAuthor().getName(); // blows up: the author side was never set
This blows up in normal Java usage because only one side of the association was set. The correct normal Java usage is:
book.setAuthor( person );
book.getAuthor().getName();
Bytecode-enhanced bi-directional association management makes that first example work by managing the "other side" of a bi-
directional association whenever one side is manipulated.
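What the association-management enhancement effectively does can be shown by hand-writing the synchronization (hypothetical classes mirroring the Person/Book example; the enhancer injects equivalent logic into the bytecode):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: keeping both sides of a bi-directional association in sync,
// which the bytecode enhancement would otherwise do for you.
public class AssociationSketch {

    public static class Person {
        public final List<Book> books = new ArrayList<>();
    }

    public static class Book {
        public Person author;

        public void setAuthor(Person author) {
            this.author = author;
            if ( !author.books.contains( this ) ) {
                author.books.add( this );   // the "other side" is updated too
            }
        }
    }

    public static void main(String[] args) {
        Person person = new Person();
        Book book = new Book();
        book.setAuthor( person );
        System.out.println( person.books.size() );  // 1
    }
}
```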
Runtime enhancement
Currently, runtime enhancement of the domain model is only supported in managed JPA environments following the JPA-defined
SPI for performing class transformations.
Even then, this support is disabled by default. To enable runtime enhancement, specify one of the following configuration
properties:
hibernate.enhancer.enableDirtyTracking
hibernate.enhancer.enableLazyInitialization
hibernate.enhancer.enableAssociationManagement
Also, at the moment, only annotated classes are supported for runtime enhancement.
Gradle plugin
Hibernate provides a Gradle plugin that is capable of performing build-time enhancement of the domain model as it is
compiled as part of a Gradle build. To use the plugin, a project would first need to apply it:
ext {
    hibernateVersion = 'hibernate-version-you-want'
}
buildscript {
    dependencies {
        classpath "org.hibernate:hibernate-gradle-plugin:$hibernateVersion"
    }
}

apply plugin: 'org.hibernate.orm'

hibernate {
    enhance {
        // any configuration goes here
    }
}
The configuration that is available is exposed through a registered Gradle DSL extension:
enableLazyInitialization
Whether enhancement for lazy attribute loading should be done.
enableDirtyTracking
Whether enhancement for self-dirty tracking should be done.
enableAssociationManagement
Whether enhancement for bi-directional association management should be done.
The enhance { } block is required in order for enhancement to occur. Enhancement is disabled by default in preparation for
additional capabilities (hbm2ddl, etc.) in the plugin.
Maven plugin
Hibernate provides a Maven plugin capable of performing build-time enhancement of the domain model as it is compiled as
part of a Maven build. See the section on the Gradle plugin for details on the configuration settings. Again, the default for those three settings
is false .
The Maven plugin supports one additional configuration setting: failOnError, which controls what happens in case of error. The
default behavior is to fail the build, but it can be set so that only a warning is issued.
<build>
    <plugins>
        [...]
        <plugin>
            <groupId>org.hibernate.orm.tooling</groupId>
            <artifactId>hibernate-enhance-maven-plugin</artifactId>
            <version>$currentHibernateVersion</version>
            <executions>
                <execution>
                    <configuration>
                        <failOnError>true</failOnError>
                        <enableLazyInitialization>true</enableLazyInitialization>
                        <enableDirtyTracking>true</enableDirtyTracking>
                        <enableAssociationManagement>true</enableAssociationManagement>
                    </configuration>
                    <goals>
                        <goal>enhance</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        [...]
    </plugins>
</build>
JAVA
Person person = new Person();
person.setId( 1L );
person.setName( "John Doe" );
entityManager.persist( person );
JAVA
Person person = new Person();
person.setId( 1L );
person.setName( "John Doe" );
session.save( person );
org.hibernate.Session also has a method named persist which follows the exact semantic defined in the JPA specification for
the persist method. It is this org.hibernate.Session method to which the Hibernate javax.persistence.EntityManager
implementation delegates.
If the entity type has a generated identifier, the value is associated with the instance when save or persist is
called. If the identifier is not automatically generated, the manually assigned (usually natural) key value has to be set on the
instance before save or persist is called.
JAVA
entityManager.remove( person );
JAVA
session.delete( person );
Hibernate itself can handle deleting detached state. JPA, however, disallows it. The implication here is
that the entity instance passed to the org.hibernate.Session delete method can be either in
managed or detached state, while the entity instance passed to remove on
javax.persistence.EntityManager must be in the managed state.
Example 316. Obtaining an entity reference without initializing its data with JPA
JAVA
Book book = new Book();
book.setAuthor( entityManager.getReference( Person.class, personId ) );
Example 317. Obtaining an entity reference without initializing its data with Hibernate API
JAVA
Book book = new Book();
book.setId( 1L );
book.setIsbn( "123-456-7890" );
entityManager.persist( book );
book.setAuthor( session.load( Person.class, personId ) );
The above works on the assumption that the entity is defined to allow lazy loading, generally through use of runtime proxies. In
both cases an exception will be thrown later if the given entity does not refer to actual database state when the application
attempts to use the returned proxy in any way that requires access to its data.
Unless the entity class is declared final , the proxy extends the entity class. If the entity class is
final , the proxy will implement an interface instead. See the @Proxy mapping section for more info.
Example 318. Obtaining an entity reference with its data initialized with JPA
JAVA
Person person = entityManager.find( Person.class, personId );
Example 319. Obtaining an entity reference with its data initialized with Hibernate API
JAVA
Person person = session.get( Person.class, personId );
Example 320. Obtaining an entity reference with its data initialized using the byId() Hibernate API
JAVA
Person person = session.byId( Person.class ).load( personId );
Example 321. Obtaining an Optional entity reference with its data initialized using the byId()
Hibernate API
JAVA
Optional<Person> optionalPerson = session.byId( Person.class ).loadOptional( personId );
@Entity(name = "Book")
public static class Book {
    @Id
    private Long id;
    @NaturalId
    private String isbn;
    @ManyToOne
    private Person author;
    //getters and setters omitted for brevity
}
We can also opt to fetch the entity or just retrieve a reference to it when using the natural identifier loading methods.
JAVA
Book book = session.bySimpleNaturalId( Book.class ).getReference( isbn );
JAVA
Book book = session
    .byNaturalId( Book.class )
    .using( "isbn", isbn )
    .load();
We can also use a Java 8 Optional to load an entity by its natural id:
JAVA
Optional<Book> optionalBook = session
    .byNaturalId( Book.class )
    .using( "isbn", isbn )
    .loadOptional();
Hibernate offers a consistent API for accessing persistent data by identifier or by the natural-id. Each of these defines the same
two data access methods:
getReference
Should be used in cases where the identifier is assumed to exist, where non-existence would be an actual error. Should never
be used to test existence. That is because this method will prefer to create and return a proxy if the data is not already
associated with the Session rather than hit the database. The quintessential use-case for using this method is to create foreign
key based associations.
load
Will return the persistent data associated with the given identifier value or null if that identifier does not exist.
Each of these two methods also defines an overloaded variant accepting an org.hibernate.LockOptions argument. Locking is
discussed in a separate chapter.
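The difference between the two access methods can be sketched with a toy, in-memory "session". All names here are illustrative; this is not how Hibernate implements its proxies:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: getReference hands back a placeholder without touching the
// "database"; load goes to the database and may return null.
public class ByIdSketch {

    static final Map<Long, String> database = new HashMap<>();
    static int databaseHits = 0;

    static class PersonProxy {
        final Long id;
        PersonProxy(Long id) { this.id = id; }

        String getName() {              // first real access hits the database
            databaseHits++;
            String name = database.get(id);
            if (name == null) {
                throw new IllegalStateException("No row for id " + id);
            }
            return name;
        }
    }

    static PersonProxy getReference(Long id) {
        return new PersonProxy(id);     // no database hit here
    }

    static String load(Long id) {
        databaseHits++;
        return database.get(id);        // null when the id does not exist
    }

    public static void main(String[] args) {
        database.put(1L, "John Doe");

        PersonProxy reference = getReference(2L); // fine: nothing is fetched yet
        System.out.println(databaseHits);         // 0

        System.out.println(load(2L));             // null
    }
}
```

Note how `getReference` never fails for a missing id until the placeholder's data is actually accessed, which mirrors why it must not be used to test existence.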
JAVA
Person person = entityManager.find( Person.class, personId );
person.setName("John Doe");
entityManager.flush();
JAVA
Person person = session.byId( Person.class ).load( personId );
person.setName("John Doe");
session.flush();
By default, when you modify an entity, all columns except the identifier are set during the update.
JAVA
@Entity(name = "Product")
public static class Product {
    @Id
    private Long id;
    @Column
    private String name;
    @Column
    private String description;
    @Column(name = "price_cents")
    private Integer priceCents;
    @Column
    private Integer quantity;
    //getters and setters omitted for brevity
}
JAVA
Product book = new Product();
book.setId( 1L );
book.setName( "High-Performance Java Persistence" );
book.setDescription( "Get the most out of your persistence layer" );
book.setPriceCents( 29_99 );
book.setQuantity( 10_000 );
entityManager.persist( book );
When you modify the Product entity, Hibernate generates the following SQL UPDATE statement:
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Product book = entityManager.find( Product.class, 1L );
book.setPriceCents( 24_99 );
} );
SQL
UPDATE
Product
SET
description = ?,
name = ?,
price_cents = ?,
quantity = ?
WHERE
id = ?
-- binding parameter [1] as [VARCHAR] - [Get the most out of your persistence layer]
-- binding parameter [2] as [VARCHAR] - [High-Performance Java Persistence]
-- binding parameter [3] as [INTEGER] - [2499]
-- binding parameter [4] as [INTEGER] - [10000]
-- binding parameter [5] as [BIGINT] - [1]
The default UPDATE statement containing all columns has two advantages:
it allows the same prepared statement to be cached and reused, no matter which properties change;
it allows you to enable batch updates even if multiple entities modify different properties.
However, there is also one downside to including all columns in the SQL UPDATE statement. If you have multiple indexes, the
database might update those redundantly even if you don’t actually modify all column values.
JAVA
@Entity(name = "Product")
@DynamicUpdate
public static class Product {
    @Id
    private Long id;
    @Column
    private String name;
    @Column
    private String description;
    @Column(name = "price_cents")
    private Integer priceCents;
    @Column
    private Integer quantity;
    //getters and setters omitted for brevity
}
This time, when rerunning the previous test case, Hibernate generates the following SQL UPDATE statement:
SQL
UPDATE
Product
SET
price_cents = ?
WHERE
id = ?
The dynamic update allows you to set just the columns that were modified in the associated entity.
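Conceptually, @DynamicUpdate amounts to building the SET clause from the modified columns only. A minimal sketch of that idea (the buildUpdate helper is hypothetical, not a Hibernate API):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of what @DynamicUpdate does conceptually: build the SET clause from
// the dirty columns only, instead of a fixed statement listing every column.
public class DynamicUpdateSketch {

    static String buildUpdate(String table, List<String> dirtyColumns) {
        String setClause = dirtyColumns.stream()
            .map(column -> column + " = ?")
            .collect(Collectors.joining(", "));
        return "UPDATE " + table + " SET " + setClause + " WHERE id = ?";
    }

    public static void main(String[] args) {
        // only price_cents changed, so only price_cents appears in the statement
        System.out.println(buildUpdate("Product", List.of("price_cents")));
        // prints: UPDATE Product SET price_cents = ? WHERE id = ?
    }
}
```

The trade-off is the one described above: every distinct set of dirty columns yields a distinct statement string, which defeats prepared-statement reuse.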
JAVA
Person person = entityManager.find( Person.class, personId );
entityManager.refresh( person );
assertEquals( "JOHN DOE", person.getName() );
JAVA
Person person = session.byId( Person.class ).load( personId );
session.refresh( person );
assertEquals( "JOHN DOE", person.getName() );
One case where this is useful is when it is known that the database state has changed since the data was read. Refreshing allows
the current database state to be pulled into the entity instance and the persistence context.
Another case where this might be useful is when database triggers are used to initialize some of the properties of the entity.
Only the entity instance and its value type collections are refreshed unless you specify REFRESH as a
cascade style of any associations. However, please note that Hibernate has the capability to handle
this automatically through its notion of generated properties. See the discussion of non-identifier
generated attributes.
Traditionally, Hibernate has allowed detached entities to be refreshed. Unfortunately, JPA
prohibits this practice and specifies that an IllegalArgumentException should be thrown instead.
For this reason, when bootstrapping the Hibernate SessionFactory using the native API, the legacy
detached entity refresh behavior is preserved. On the other hand, when bootstrapping Hibernate
through the JPA EntityManagerFactory building process, detached entities are not allowed to be refreshed by
default.
For more about the hibernate.allow_refresh_detached_entity configuration property, check out the
Configurations section as well.
However, you have to be very careful when cascading the refresh action to any transient entity.
JAVA
try {
Person person = entityManager.find( Person.class, personId );
entityManager.refresh( person );
}
catch ( EntityNotFoundException expected ) {
log.info( "Beware when cascading the refresh associations to transient entities!" );
}
In the aforementioned example, an EntityNotFoundException is thrown because the Book entity is still in a transient state.
When the refresh action is cascaded from the Person entity, Hibernate will not be able to locate the Book entity in the database.
For this reason, you should be very careful when mixing the refresh action with transient child entity objects.
Detached data can still be manipulated, however, the persistence context will no longer automatically know about these
modifications, and the application will need to intervene to make the changes persistent again.
JPA does not provide for this model. This is only available through Hibernate org.hibernate.Session .
JAVA
Person person = session.byId( Person.class ).load( personId );
//Clear the Session so the person entity becomes detached
session.clear();
person.setName( "Mr. John Doe" );
session.update( person );
JAVA
Person person = session.byId( Person.class ).load( personId );
//Clear the Session so the person entity becomes detached
session.clear();
person.setName( "Mr. John Doe" );
session.saveOrUpdate( person );
The method name update is a bit misleading here. It does not mean that an SQL UPDATE is
immediately performed. It does, however, mean that an SQL UPDATE will be performed when the
persistence context is flushed, since Hibernate does not know the entity's previous state against which to
compare for changes. If the entity is mapped with select-before-update, Hibernate will pull the current state
from the database and see if an update is needed.
Provided the entity is detached, update and saveOrUpdate operate exactly the same.
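The select-before-update behavior can be sketched as a simple state comparison. This is illustrative only; internally Hibernate compares the freshly selected snapshot against the detached state before deciding whether to issue the UPDATE:

```java
import java.util.Map;

// Illustration of the select-before-update idea: pull the current database
// state and only issue an UPDATE when the detached state actually differs.
public class SelectBeforeUpdateSketch {

    static boolean needsUpdate(Map<String, Object> databaseState,
                               Map<String, Object> detachedState) {
        // an UPDATE is needed only when at least one attribute differs
        return !databaseState.equals(detachedState);
    }

    public static void main(String[] args) {
        Map<String, Object> inDb     = Map.of("name", "John Doe");
        Map<String, Object> detached = Map.of("name", "John Doe");
        System.out.println(needsUpdate(inDb, detached)); // false -> skip UPDATE
    }
}
```

The extra SELECT is the cost of this mapping; without it, a reattached detached entity is always scheduled for an UPDATE at flush time.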
Although not exactly how Hibernate implements it internally, the following example is a good visualization of the merge operation.
JAVA
public Person merge(Person detached) {
    Person newReference = session.byId( Person.class ).load( detached.getId() );
    newReference.setName( detached.getName() );
    return newReference;
}
JAVA
Person person = entityManager.find( Person.class, personId );
//Clear the EntityManager so the person entity becomes detached
entityManager.clear();
person.setName( "Mr. John Doe" );
person = entityManager.merge( person );
Merging gotchas
For example, Hibernate throws an IllegalStateException when merging a parent entity which has references to two detached child
entities child1 and child2 (obtained from different sessions), where child1 and child2 represent the same persistent entity,
Child .
A configuration property, hibernate.event.merge.entity_copy_observer , controls how Hibernate responds when
multiple representations of the same persistent entity ("entity copy") are detected while merging.
disallow
(the default) throws an IllegalStateException if an entity copy is detected
allow
performs the merge operation on each entity copy that is detected
log
(provided for testing only) performs the merge operation on each entity copy that is detected and logs information about the
entity copies. This setting requires DEBUG logging be enabled for
org.hibernate.event.internal.EntityCopyAllowedLoggedObserver .
Because cascade order is undefined, the order in which the entity copies are merged is undefined. As
a result, if property values in the entity copies are not consistent, the resulting entity state will be
indeterminate, and data will be lost from all entity copies except for the last one merged. Therefore,
the last writer wins.
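The last-writer-wins behavior can be sketched as follows. This is a toy model; the merge method here is not Hibernate's:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the warning above: when several copies of the same entity are
// merged in an undefined order, each merge overwrites the managed state,
// so only the last copy's values survive ("last writer wins").
public class EntityCopyMergeSketch {

    static final Map<Long, String> managedState = new HashMap<>();

    static void merge(Long id, String name) {
        managedState.put(id, name);   // each merge overwrites the previous copy
    }

    public static void main(String[] args) {
        merge(1L, "name from copy 1");
        merge(1L, "name from copy 2");   // cascade order is undefined in reality
        System.out.println(managedState.get(1L)); // name from copy 2
    }
}
```

Because the real cascade order is undefined, which copy ends up being "the last" is itself indeterminate, which is exactly why inconsistent copies lead to unpredictable final state.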
If an entity copy cascades the merge operation to an association that is (or contains) a new entity, that new
entity will be merged (i.e., persisted and the merge operation will be cascaded to its associations according to
its mapping), even if that same association is ultimately overwritten when Hibernate merges a different
representation having a different value for its association.
If the association is mapped with orphanRemoval = true , the new entity will not be deleted because the
semantics of orphanRemoval do not apply if the entity being orphaned is a new entity.
There are known issues when representations of the same persistent entity have different values for a
collection. See HHH-9239 (https://hibernate.atlassian.net/browse/HHH-9239) and HHH-9240
(https://hibernate.atlassian.net/browse/HHH-9240) for more details. These issues can cause data loss or corruption.
The only way to exclude particular entity classes or associations that contain critical data is to provide a custom
implementation of org.hibernate.event.spi.EntityCopyObserver with the desired behavior, and setting
hibernate.event.merge.entity_copy_observer to the class name.
Hibernate provides limited DEBUG logging capabilities that can help determine the entity classes for
which entity copies were found. By setting hibernate.event.merge.entity_copy_observer to log and
enabling DEBUG logging for org.hibernate.event.internal.EntityCopyAllowedLoggedObserver , the
following will be logged each time an application calls EntityManager.merge( entity ) or
Session.merge( entity ) :
number of times multiple representations of the same persistent entity were detected, summarized by entity
name;
details by entity name and ID, including output from calling toString() on each representation being merged
as well as the merge result.
The log should be reviewed to determine if multiple representations of entities containing critical data are
detected. If so, the application should be modified so there is only one representation, and a custom
implementation of org.hibernate.event.spi.EntityCopyObserver should be provided to disallow entity copies
for entities with critical data.
Using optimistic locking is recommended to detect if different representations are from different versions of the
same persistent entity. If they are not from the same version, Hibernate will throw either the JPA
OptimisticLockException or the native StaleObjectStateException depending on your bootstrapping strategy.
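A minimal sketch of how version-based optimistic locking detects a stale representation. This is illustrative; Hibernate performs the check via the version column in the UPDATE statement's WHERE clause and throws its own exception types:

```java
// Minimal illustration of version-based optimistic locking: an update only
// succeeds when the in-memory version matches the row's version; otherwise the
// representation is stale (Hibernate would throw OptimisticLockException or
// StaleObjectStateException at this point).
public class OptimisticLockSketch {

    static long rowVersion = 1;

    static void update(long expectedVersion) {
        if (expectedVersion != rowVersion) {
            throw new IllegalStateException("Stale state: row version is "
                + rowVersion + ", entity version is " + expectedVersion);
        }
        rowVersion++;   // a successful update bumps the version
    }

    public static void main(String[] args) {
        update(1);      // ok, row version becomes 2
        try {
            update(1);  // a second copy carrying the old version is rejected
        }
        catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```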
JAVA
boolean contained = entityManager.contains( person );
JAVA
boolean contained = session.contains( person );
JAVA
PersistenceUnitUtil persistenceUnitUtil = entityManager.getEntityManagerFactory().getPersistenceUnitUtil();
boolean personInitialized = persistenceUnitUtil.isLoaded( person );
JAVA
boolean personInitialized = Hibernate.isInitialized( person );
In JPA, there is an alternative means to check laziness using the following javax.persistence.PersistenceUtil pattern (which
is recommended wherever possible).
JAVA
PersistenceUtil persistenceUnitUtil = Persistence.getPersistenceUtil();
boolean personInitialized = persistenceUnitUtil.isLoaded( person );
When the flush() method is called, the state of the entity is synchronized with the database. If you do not want this
synchronization to occur, or if you are processing a huge number of objects and need to manage memory efficiently, the
evict() method can be used to remove the object and its collections from the first-level cache.
JAVA
for ( Person person : entityManager.createQuery( "select p from Person p", Person.class )
        .getResultList() ) {
    dtos.add( toDTO( person ) );
    entityManager.detach( person );
}
JAVA
Session session = entityManager.unwrap( Session.class );
for ( Person person : (List<Person>) session.createQuery( "select p from Person p" ).list() ) {
    dtos.add( toDTO( person ) );
    session.evict( person );
}
To detach all entities from the current persistence context, both the EntityManager and the Hibernate Session define a
clear() method.
JAVA
entityManager.clear();
session.clear();
To verify if an entity instance is currently attached to the running persistence context, both the EntityManager and the
Hibernate Session define a contains(Object entity) method.
JAVA
entityManager.contains( person );
session.contains( person );
JPA allows you to propagate entity state transitions from a parent entity to its children. For this purpose, the JPA javax.persistence.CascadeType defines the following cascade types:
ALL
PERSIST
MERGE
REMOVE
REFRESH
DETACH
Additionally, the CascadeType.ALL will propagate any Hibernate-specific operation, which is defined by the
org.hibernate.annotations.CascadeType enum:
SAVE_UPDATE
REPLICATE
LOCK
The following examples will explain some of the aforementioned cascade operations using the following entities:
@Entity
public class Person {
    @Id
    private Long id;
    private String name;
    //the cascade settings on this association vary in the examples below
    @OneToMany(mappedBy = "owner")
    private List<Phone> phones = new ArrayList<>();
    //getters, setters and addPhone(Phone) omitted for brevity
}
@Entity
public class Phone {
    @Id
    private Long id;
    @Column(name = "`number`")
    private String number;
    @ManyToOne
    private Person owner;
    //getters and setters omitted for brevity
}
5.13.1. CascadeType.PERSIST
The CascadeType.PERSIST allows us to persist a child entity along with the parent one.
JAVA
Person person = new Person();
person.setId( 1L );
person.setName( "John Doe" );
Phone phone = new Phone();
phone.setId( 1L );
phone.setNumber( "123-456-7890" );
person.addPhone( phone );
entityManager.persist( person );
SQL
INSERT INTO Person ( name, id )
VALUES ( 'John Doe', 1 )
INSERT INTO Phone ( "number", owner_id, id )
VALUES ( '123-456-7890', 1, 1 )
Even if just the Person parent entity was persisted, Hibernate has managed to cascade the persist operation to the associated Phone child entity as well.
5.13.2. CascadeType.MERGE
The CascadeType.MERGE allows us to merge a child entity along with the parent one.
JAVA
Phone phone = entityManager.find( Phone.class, 1L );
Person person = phone.getOwner();
entityManager.clear();
entityManager.merge( person );
SQL
SELECT
p.id as id1_0_1_,
p.name as name2_0_1_,
ph.owner_id as owner_id3_1_3_,
ph.id as id1_1_3_,
ph.id as id1_1_0_,
ph."number" as number2_1_0_,
ph.owner_id as owner_id3_1_0_
FROM
Person p
LEFT OUTER JOIN
Phone ph
on p.id=ph.owner_id
WHERE
p.id = 1
During merge, the current state of the entity is copied onto the entity version that was just fetched from the database. That’s the
reason why Hibernate executed the SELECT statement which fetched both the Person entity along with its children.
5.13.3. CascadeType.REMOVE
The CascadeType.REMOVE allows us to remove a child entity along with the parent one. Traditionally, Hibernate called this
operation delete, which is why org.hibernate.annotations.CascadeType provides a DELETE cascade option. However,
CascadeType.REMOVE and org.hibernate.annotations.CascadeType.DELETE are identical.
JAVA
Person person = entityManager.find( Person.class, 1L );
entityManager.remove( person );
SQL
DELETE FROM Phone WHERE id = 1
DELETE FROM Person WHERE id = 1
5.13.4. CascadeType.DETACH
CascadeType.DETACH is used to propagate the detach operation from a parent entity to a child.
JAVA
Person person = entityManager.find( Person.class, 1L );
assertEquals( 1, person.getPhones().size() );
Phone phone = person.getPhones().get( 0 );
entityManager.detach( person );
5.13.5. CascadeType.LOCK
Although unintuitive, CascadeType.LOCK does not propagate a lock request from a parent entity to its children. Such a use case
requires the PessimisticLockScope.EXTENDED value of the javax.persistence.lock.scope property.
However, CascadeType.LOCK allows us to reattach a parent entity along with its children to the currently running Persistence
Context.
JAVA
Person person = entityManager.find( Person.class, 1L );
assertEquals( 1, person.getPhones().size() );
Phone phone = person.getPhones().get( 0 );
entityManager.detach( person );
//reattach the detached entity, along with its children, to the current persistence context
entityManager.unwrap( Session.class )
    .buildLockRequest( LockOptions.NONE )
    .lock( person );
5.13.6. CascadeType.REFRESH
The CascadeType.REFRESH is used to propagate the refresh operation from a parent entity to a child. The refresh operation will
discard the current entity state, and it will override it using the one loaded from the database.
entityManager.refresh( person );
SQL
SELECT
p.id as id1_0_1_,
p.name as name2_0_1_,
ph.owner_id as owner_id3_1_3_,
ph.id as id1_1_3_,
ph.id as id1_1_0_,
ph."number" as number2_1_0_,
ph.owner_id as owner_id3_1_0_
FROM
Person p
LEFT OUTER JOIN
Phone ph
ON p.id=ph.owner_id
WHERE
p.id = 1
In the aforementioned example, you can see that both the Person and Phone entities are refreshed even though we only called this
operation on the parent entity.
5.13.7. CascadeType.REPLICATE
The CascadeType.REPLICATE is to replicate both the parent and the child entities. The replicate operation allows you to
synchronize entities coming from different sources of data.
JAVA
Person person = new Person();
person.setId( 1L );
person.setName( "John Doe Sr." );
Phone phone = new Phone();
phone.setId( 1L );
phone.setNumber( "(01) 123-456-7890" );
person.addPhone( phone );
session.replicate( person, ReplicationMode.OVERWRITE );
SELECT
id
FROM
Person
WHERE
id = 1
SELECT
id
FROM
Phone
WHERE
id = 1
UPDATE
Person
SET
name = 'John Doe Sr.'
WHERE
id = 1
UPDATE
Phone
SET
"number" = '(01) 123-456-7890',
owner_id = 1
WHERE
id = 1
As illustrated by the SQL statements being generated, both the Person and Phone entities are replicated to the underlying
database rows.
So, when annotating the @ManyToOne association with @OnDelete( action = OnDeleteAction.CASCADE ) , the automatic
schema generator will apply the ON DELETE CASCADE SQL directive to the Foreign Key declaration, as illustrated by the following
example.
JAVA
@Entity(name = "Person")
public static class Person {
    @Id
    private Long id;
    private String name;
    //getters and setters omitted for brevity
}
@Entity(name = "Phone")
public static class Phone {
    @Id
    private Long id;
    @Column(name = "`number`")
    private String number;
    @ManyToOne
    @OnDelete(action = OnDeleteAction.CASCADE)
    private Person owner;
    //getters and setters omitted for brevity
}
SQL
create table Person (
    id bigint not null,
    name varchar(255),
    primary key (id)
)
-- the Phone foreign key is created with the ON DELETE CASCADE directive
-- (the constraint name below is auto-generated)
alter table Phone
    add constraint FK_phone_person foreign key (owner_id)
    references Person on delete cascade
Now, you can just remove the Person entity, and the associated Phone is going to be removed automatically.
JAVA
Person person = entityManager.find( Person.class, 1L );
entityManager.remove( person );
SQL
delete from Person where id = ?
Certain methods of the JPA EntityManager or the Hibernate Session will not leave the Persistence Context in a consistent state.
As a rule of thumb, no exception thrown by Hibernate can be treated as recoverable. Ensure that the Session will be closed by
calling the close() method in a finally block.
Rolling back the database transaction does not put your business objects back into the state they were in at the start of the
transaction. This means that the database state and the business objects will be out of sync. Usually, this is not a problem because
exceptions are not recoverable and you will have to start over after rollback anyway.
Both the PersistenceException and the HibernateException are runtime exceptions because, in our opinion, we should not
force the application developer to catch an unrecoverable exception at a low layer. In most systems, unchecked and fatal
exceptions are handled in one of the first frames of the method call stack (i.e., in higher layers) and either an error message is
presented to the application user or some other appropriate action is taken. Note that Hibernate might also throw other
unchecked exceptions that are not a HibernateException . These are not recoverable either, and appropriate action should be
taken.
Hibernate wraps the JDBC SQLException , thrown while interacting with the database, in a JDBCException
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/JDBCException.html). In fact, Hibernate will attempt to convert the
exception into a more meaningful subclass of JDBCException . The underlying SQLException is always available via
JDBCException.getSQLException()
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/JDBCException.html#getSQLException--). Hibernate converts the
SQLException into an appropriate JDBCException subclass using the SQLExceptionConverter
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/exception/spi/SQLExceptionConverter.html) attached to the current
SessionFactory .
By default, the SQLExceptionConverter is defined by the configured Hibernate Dialect via the
buildSQLExceptionConversionDelegate
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/dialect/Dialect.html#buildSQLExceptionConversionDelegate--) method which is
overridden by several database-specific Dialects .
The standard JDBCException subtypes are the following:
ConstraintViolationException
indicates some form of integrity constraint violation.
DataException
indicates that evaluation of the valid SQL statement against the given data resulted in some illegal operation, mismatched
types, truncation or incorrect cardinality.
GenericJDBCException
a generic exception which did not fall into any of the other categories.
JDBCConnectionException
indicates an error with the underlying JDBC communication.
LockAcquisitionException
indicates an error acquiring a lock level necessary to perform the requested operation.
LockTimeoutException
indicates that the lock acquisition request has timed out.
PessimisticLockException
indicates that a lock acquisition request has failed.
QueryTimeoutException
indicates that the current executing query has timed out.
SQLGrammarException
indicates a grammar or syntax problem with the issued SQL.
Starting with Hibernate 5.2, the Hibernate Session extends the JPA EntityManager . For this reason,
when a SessionFactory is built via Hibernate’s native bootstrapping, the HibernateException or
SQLException can be wrapped in a JPA PersistenceException
(https://docs.oracle.com/javaee/7/api/javax/persistence/PersistenceException.html) when thrown by Session
methods that implement EntityManager methods (e.g., Session.merge(Object object)
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/Session.html#merge-java.lang.Object-),
Session.flush() (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/Session.html#flush--)).
If your SessionFactory is built via Hibernate’s native bootstrapping, and you don’t want the Hibernate
exceptions to be wrapped in the JPA PersistenceException , you need to set the
hibernate.native_exception_handling_51_compliance configuration property to true . See the
hibernate.native_exception_handling_51_compliance configuration property for more details.
6. Flushing
Flushing is the process of synchronizing the state of the persistence context with the underlying database. The EntityManager
and the Hibernate Session expose a set of methods, through which the application developer can change the persistent state of
an entity.
The persistence context acts as a transactional write-behind cache, queuing any entity state change. Like any write-behind cache,
changes are first applied in-memory and synchronized with the database at flush time. The flush operation takes every
entity state change and translates it to an INSERT , UPDATE or DELETE statement.
Because DML statements are grouped together, Hibernate can apply batching transparently. See the
Batching chapter for more information.
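The write-behind behavior can be sketched as a simple queue that is only drained at flush time. This is conceptual only; Hibernate's real ActionQueue is far more elaborate:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the write-behind cache described above: state changes are queued
// in memory and only turned into SQL statements when flush() runs, which is
// what lets Hibernate group similar statements for batching.
public class WriteBehindSketch {

    static final List<String> queuedChanges = new ArrayList<>();
    static final List<String> executedSql = new ArrayList<>();

    static void persist(String entity) {
        queuedChanges.add("INSERT " + entity);  // nothing hits the database yet
    }

    static void flush() {
        executedSql.addAll(queuedChanges);      // translate queued changes to SQL
        queuedChanges.clear();
    }

    public static void main(String[] args) {
        persist("Person#1");
        System.out.println(executedSql);  // [] - change is still only in memory
        flush();
        System.out.println(executedSql);  // [INSERT Person#1]
    }
}
```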
Hibernate defines the following flush strategies, through the FlushMode enum:
ALWAYS
Flushes the Session before every query.
AUTO
This is the default mode, and it flushes the Session only if necessary.
COMMIT
The Session tries to delay the flush until the current Transaction is committed, although it might flush prematurely too.
MANUAL
The Session flushing is delegated to the application, which must call Session.flush() explicitly in order to apply the
persistence context changes.
The AUTO flush mode triggers a flush in the following circumstances:
prior to committing a Transaction
prior to executing a JPQL/HQL query that overlaps with the queued entity actions
before executing any native SQL query that has no registered synchronization
JAVA
entityManager = entityManagerFactory().createEntityManager();
txn = entityManager.getTransaction();
txn.begin();
Person person = new Person( "John Doe" );
entityManager.persist( person );
log.info( "Entity is in persisted state" );
txn.commit();
SQL
--INFO: Entity is in persisted state
INSERT INTO Person (name, id) VALUES ('John Doe', 1)
Hibernate logs the message prior to inserting the entity because the flush only occurred during transaction commit.
This is valid for the SEQUENCE and TABLE identifier generators. The IDENTITY generator must execute
the insert right after calling persist() . For details, see the discussion of generators in Identifier
generators.
JAVA
Person person = new Person( "John Doe" );
entityManager.persist( person );
entityManager.createQuery( "select p from Advertisement p" ).getResultList();
entityManager.createQuery( "select p from Person p" ).getResultList();
SQL
SELECT a.id AS id1_0_ ,
a.title AS title2_0_
FROM Advertisement a
The reason why the Advertisement entity query didn’t trigger a flush is that there’s no overlapping between the
Advertisement and the Person tables:
@Entity(name = "Person")
public static class Person {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    //constructors, getters and setters omitted for brevity
}
@Entity(name = "Advertisement")
public static class Advertisement {
    @Id
    @GeneratedValue
    private Long id;
    private String title;
    //getters and setters omitted for brevity
}
When querying for a Person entity, the flush is triggered prior to executing the entity query.
JAVA
Person person = new Person( "John Doe" );
entityManager.persist( person );
entityManager.createQuery( "select p from Person p" ).getResultList();
SQL
INSERT INTO Person (name, id) VALUES ('John Doe', 1)
This time, the flush was triggered by a JPQL query because the pending entity persist action overlaps with the query being
executed.
When bootstrapping through JPA, the EntityManager also triggers a flush before executing a native SQL query:
JAVA
assertTrue(((Number) entityManager
    .createNativeQuery( "select count(*) from Person" )
    .getSingleResult()).intValue() == 0 );
Person person = new Person( "John Doe" );
entityManager.persist( person );
assertTrue(((Number) entityManager
    .createNativeQuery( "select count(*) from Person" )
    .getSingleResult()).intValue() == 1 );
If you bootstrap Hibernate natively, and not through JPA, the Session API will not, by default, trigger a flush automatically when
executing a native query.
JAVA
assertTrue(((Number) session
    .createNativeQuery( "select count(*) from Person" )
    .getSingleResult()).intValue() == 0 );
Person person = new Person( "John Doe" );
session.persist( person );
assertTrue(((Number) session
    .createNativeQuery( "select count(*) from Person" )
    .uniqueResult()).intValue() == 0 );
To flush the persistence context prior to executing the native query, the query must register a synchronized entity class:
JAVA
assertTrue(((Number) session
    .createNativeQuery( "select count(*) from Person" )
    .getSingleResult()).intValue() == 0 );
Person person = new Person( "John Doe" );
session.persist( person );
assertTrue(((Number) session
    .createNativeQuery( "select count(*) from Person" )
    .addSynchronizedEntityClass( Person.class )
    .uniqueResult()).intValue() == 1 );
“ If FlushModeType.COMMIT is set, the effect of updates made to entities in the persistence context upon
queries is unspecified.
— Section 3.10.8 of the JPA 2.1 Specification
When executing a JPQL query, the persistence context is only flushed when the current running transaction is committed.
JAVA
Person person = new Person( "John Doe" );
entityManager.persist( person );
entityManager.createQuery( "select a from Advertisement a" ).getResultList();
SQL
SELECT a.id AS id1_0_ ,
a.title AS title2_0_
FROM Advertisement a
Because the JPA doesn’t impose a strict rule on delaying flushing, when executing a native SQL query, the persistence context is
going to be flushed.
JAVA
Person person = new Person( "John Doe" );
entityManager.persist( person );
assertTrue(((Number) entityManager
    .createNativeQuery( "select count(*) from Person" )
    .getSingleResult()).intValue() == 1 );
SQL
INSERT INTO Person (name, id) VALUES ('John Doe', 1)
The ALWAYS flush mode triggers a persistence context flush even when executing a native SQL query against the Session API.
JAVA
Person person = new Person ("John Doe");
entityManager.persist(person);
SQL
INSERT INTO Person (name, id) VALUES ('John Doe', 1)
JAVA
Person person = new Person ("John Doe");
entityManager.persist(person);
assertTrue(((Number ) entityManager
.createQuery("select count(id) from Person")
.getSingleResult()).intValue() == 0);
assertTrue(((Number ) session
.createNativeQuery("select count(*) from Person")
.uniqueResult()).intValue() == 0);
SQL
SELECT COUNT(p.id) AS col_0_0_
FROM Person p
SELECT COUNT(*)
FROM Person
The INSERT statement was not executed because the persistence context was not flushed; there was no manual flush() call.
This mode is useful for multi-request logical transactions where only the last request should
flush the persistence context.
INSERT
The INSERT statement is generated either by the EntityInsertAction or EntityIdentityInsertAction . These actions are
scheduled by the persist operation, either explicitly or through cascading the PersistEvent from a parent to a child entity.
DELETE
UPDATE
The UPDATE statement is generated by EntityUpdateAction during flushing if the managed entity has been marked
modified. The dirty checking mechanism is responsible for determining if a managed entity has been modified since it was first
loaded.
Hibernate does not execute the SQL statements in the order of their associated entity state operations.
JAVA
Person person = entityManager.find( Person .class , 1L);
entityManager.remove(person);
SQL
INSERT INTO Person (name, id)
VALUES ('John Doe', 2L)
Even though we removed the first entity before persisting the new one, Hibernate is going to execute the DELETE statement after the
INSERT .
The order in which SQL statements are executed is given by the ActionQueue and not by the order in
which entity state operations have been previously defined.
1. OrphanRemovalAction
2. EntityInsertAction or EntityIdentityInsertAction
3. EntityUpdateAction
4. CollectionRemoveAction
5. CollectionUpdateAction
6. CollectionRecreateAction
7. EntityDeleteAction
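This fixed priority order can be illustrated with a small, self-contained sketch. This is a toy model for illustration only, not Hibernate's actual org.hibernate.engine.spi.ActionQueue; the enum and method names here are made up:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ActionOrdering {

    // Mirrors the fixed execution order listed above.
    enum Action {
        ORPHAN_REMOVAL,
        ENTITY_INSERT,
        ENTITY_UPDATE,
        COLLECTION_REMOVE,
        COLLECTION_UPDATE,
        COLLECTION_RECREATE,
        ENTITY_DELETE
    }

    // Replays queued actions in the fixed priority order, regardless of the
    // order in which the application performed the entity state operations.
    static List<Action> executionOrder(List<Action> queued) {
        List<Action> sorted = new ArrayList<>(queued);
        sorted.sort(Comparator.naturalOrder()); // enums compare by ordinal
        return sorted;
    }

    public static void main(String[] args) {
        // The application removed an entity first and persisted a new one second,
        // but the INSERT is still executed before the DELETE.
        System.out.println(executionOrder(
                List.of(Action.ENTITY_DELETE, Action.ENTITY_INSERT)));
    }
}
```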
7. Database access
7.1. ConnectionProvider
As an ORM tool, probably the single most important thing you need to tell Hibernate is how to connect to your database so that it
may connect on behalf of your application. This is ultimately the function of the
org.hibernate.engine.jdbc.connections.spi.ConnectionProvider interface. Hibernate provides some out of the box
implementations of this interface. ConnectionProvider is also an extension point so you can also use custom implementations
from third parties or written yourself. The ConnectionProvider to use is defined by the
hibernate.connection.provider_class setting. See the org.hibernate.cfg.AvailableSettings#CONNECTION_PROVIDER
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/cfg/AvailableSettings.html#CONNECTION_PROVIDER) Javadocs for additional details.
Generally speaking, applications should not have to configure a ConnectionProvider explicitly if using one of the Hibernate-
provided implementations. Hibernate will internally determine which ConnectionProvider to use based on the following
algorithm:
To use this integration, the application must include the hibernate-c3p0 module jar (as well as its
dependencies) on the classpath.
Hibernate also provides support for applications to use c3p0 (http://www.mchange.com/projects/c3p0/) connection pooling. When using
this c3p0 support, a number of additional configuration settings are recognized.
Transaction isolation of the Connections is managed by the ConnectionProvider itself. See ConnectionProvider support for
transaction isolation setting.
hibernate.connection.driver_class
hibernate.connection.url
Any settings prefixed with hibernate.connection. (other than the "special ones")
These all have the hibernate.connection. prefix stripped and the rest will be passed as JDBC connection properties
hibernate.c3p0.min_size or c3p0.minPoolSize
The minimum size of the c3p0 pool. See c3p0 minPoolSize (http://www.mchange.com/projects/c3p0/#minPoolSize)
hibernate.c3p0.max_size or c3p0.maxPoolSize
The maximum size of the c3p0 pool. See c3p0 maxPoolSize (http://www.mchange.com/projects/c3p0/#maxPoolSize)
hibernate.c3p0.timeout or c3p0.maxIdleTime
The Connection idle time. See c3p0 maxIdleTime (http://www.mchange.com/projects/c3p0/#maxIdleTime)
hibernate.c3p0.max_statements or c3p0.maxStatements
Controls the c3p0 PreparedStatement cache size (if using). See c3p0 maxStatements
(http://www.mchange.com/projects/c3p0/#maxStatements)
hibernate.c3p0.acquire_increment or c3p0.acquireIncrement
Number of connections c3p0 should acquire at a time when the pool is exhausted. See c3p0 acquireIncrement
(http://www.mchange.com/projects/c3p0/#acquireIncrement)
hibernate.c3p0.idle_test_period or c3p0.idleConnectionTestPeriod
Idle time before a c3p0 pooled connection is validated. See c3p0 idleConnectionTestPeriod
(http://www.mchange.com/projects/c3p0/#idleConnectionTestPeriod)
hibernate.c3p0.initialPoolSize
The initial c3p0 pool size. If not specified, default is to use the min pool size. See c3p0 initialPoolSize
(http://www.mchange.com/projects/c3p0/#initialPoolSize)
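The prefix-stripping rule described above (hibernate.connection.* settings, minus the "special ones", becoming plain JDBC connection properties) can be sketched in plain Java. This is an illustration of the rule only, not Hibernate's internal code, and the SPECIAL set below is an abbreviated stand-in for the real list of special settings:

```java
import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class ConnectionSettings {

    static final String PREFIX = "hibernate.connection.";

    // Settings Hibernate treats specially rather than passing to the driver
    // (an illustrative subset, not the exhaustive list).
    static final Set<String> SPECIAL = Set.of(
            "hibernate.connection.driver_class",
            "hibernate.connection.url",
            "hibernate.connection.provider_class",
            "hibernate.connection.isolation",
            "hibernate.connection.autocommit",
            "hibernate.connection.pool_size");

    // Strip the hibernate.connection. prefix from non-special settings and
    // collect the remainder as plain JDBC connection properties.
    static Properties jdbcProperties(Map<String, String> settings) {
        Properties props = new Properties();
        for (Map.Entry<String, String> e : settings.entrySet()) {
            String key = e.getKey();
            if (key.startsWith(PREFIX) && !SPECIAL.contains(key)) {
                props.setProperty(key.substring(PREFIX.length()), e.getValue());
            }
        }
        return props;
    }
}
```

For example, hibernate.connection.user would be passed to the JDBC driver as a user property, while hibernate.connection.url (a special setting) and settings outside the prefix would not be passed at all.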
To use this integration, the application must include the hibernate-proxool module jar (as well as its
dependencies) on the classpath.
Hibernate also provides support for applications to use Proxool (http://proxool.sourceforge.net/) connection pooling.
Transaction isolation of the Connections is managed by the ConnectionProvider itself. See ConnectionProvider support for
transaction isolation setting.
To use this integration, the application must include the hibernate-hikaricp module jar (as well as its
dependencies) on the classpath.
Hibernate also provides support for applications to use the HikariCP (http://brettwooldridge.github.io/HikariCP/) connection pool.
Set all of your Hikari settings in Hibernate prefixed by hibernate.hikari. and this ConnectionProvider will pick them up
and pass them along to Hikari. Additionally, this ConnectionProvider will pick up the following Hibernate-specific properties
and map them to the corresponding Hikari ones (any hibernate.hikari. prefixed ones have precedence):
hibernate.connection.driver_class
hibernate.connection.url
hibernate.connection.username
hibernate.connection.password
hibernate.connection.isolation
Mapped to Hikari’s transactionIsolation setting. See ConnectionProvider support for transaction isolation setting. Note that
Hikari only supports JDBC standard isolation levels (apparently).
hibernate.connection.autocommit
To use this integration, the application must include the hibernate-vibur module jar (as well as its
dependencies) on the classpath.
Hibernate also provides support for applications to use the Vibur DBCP (http://www.vibur.org/) connection pool.
Set all of your Vibur settings in Hibernate prefixed by hibernate.vibur. and this ConnectionProvider will pick them up and
pass them along to Vibur DBCP. Additionally, this ConnectionProvider will pick up the following Hibernate-specific properties
and map them to the corresponding Vibur ones (any hibernate.vibur. prefixed ones have precedence):
hibernate.connection.driver_class
hibernate.connection.url
hibernate.connection.username
hibernate.connection.password
hibernate.connection.isolation
Mapped to Vibur’s defaultTransactionIsolationValue setting. See ConnectionProvider support for transaction isolation
setting.
hibernate.connection.autocommit
To use this integration, the application must include the hibernate-agroal module jar (as well as its
dependencies) on the classpath.
Hibernate also provides support for applications to use the Agroal (http://agroal.github.io/) connection pool.
Set all of your Agroal settings in Hibernate prefixed by hibernate.agroal. and this ConnectionProvider will pick them up
and pass them along to Agroal connection pool. Additionally, this ConnectionProvider will pick up the following Hibernate-
specific properties and map them to the corresponding Agroal ones (any hibernate.agroal. prefixed ones have precedence):
hibernate.connection.driver_class
hibernate.connection.url
hibernate.connection.username
hibernate.connection.password
hibernate.connection.isolation
Mapped to Agroal’s jdbcTransactionIsolation setting. See ConnectionProvider support for transaction isolation setting.
hibernate.connection.autocommit
The built-in connection pool is not supported for use in a production system.
The hibernate.connection.isolation setting can be specified in one of two formats:
the name of the java.sql.Connection constant field representing the isolation level you would like to use. For example,
TRANSACTION_REPEATABLE_READ for java.sql.Connection#TRANSACTION_REPEATABLE_READ
(https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#TRANSACTION_REPEATABLE_READ). Note that this is only supported for
JDBC standard isolation levels, not for isolation levels specific to a particular JDBC driver.
a short-name version of the java.sql.Connection constant field without the TRANSACTION_ prefix. For example,
REPEATABLE_READ for java.sql.Connection#TRANSACTION_REPEATABLE_READ
(https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#TRANSACTION_REPEATABLE_READ). Again, this is only supported for
JDBC standard isolation levels, not for isolation levels specific to a particular JDBC driver.
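The name resolution described above can be sketched with plain JDK reflection over the java.sql.Connection constants. This is an illustrative sketch of the lookup, not Hibernate's internal implementation:

```java
import java.lang.reflect.Field;
import java.sql.Connection;

public class IsolationResolver {

    // Resolve either the full constant name (TRANSACTION_REPEATABLE_READ)
    // or the short name (REPEATABLE_READ) to its java.sql.Connection value.
    static int resolve(String name) {
        String fieldName = name.startsWith("TRANSACTION_") ? name : "TRANSACTION_" + name;
        try {
            Field field = Connection.class.getField(fieldName);
            return field.getInt(null); // public static final int constant
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Unknown isolation level: " + name, e);
        }
    }
}
```

Because the lookup is against the java.sql.Connection constants, only the JDBC standard levels resolve; a driver-specific level name would simply fail.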
In most cases, Hibernate will be able to determine the proper Dialect to use by asking some questions of the JDBC Connection
during bootstrap. For information on Hibernate’s ability to determine the proper Dialect to use (and your ability to influence that
resolution), see Dialect resolution.
If for some reason it is not able to determine the proper one or you want to use a custom Dialect, you will need to set the
hibernate.dialect setting.
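For example, to set the dialect explicitly in hibernate.properties (the class name shown corresponds to one of the built-in dialects listed below):

```properties
# Only needed when automatic dialect resolution fails or a custom Dialect is used
hibernate.dialect=org.hibernate.dialect.PostgreSQL9Dialect
```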
CUBRID: Support for the CUBRID database, version 8.3. May work with later versions.
DB2390: Support for DB2 Universal Database for OS/390, also known as DB2/390.
DB2400: Support for DB2 Universal Database for iSeries, also known as DB2/400.
HANAColumnStore: Support for the SAP HANA database column store. This is the recommended dialect for the SAP HANA database.
Ingres9: Support for the Ingres database, version 9.3. May work with newer versions.
Ingres10: Support for the Ingres database, version 10. May work with newer versions.
Mimer: Support for the Mimer database, version 9.2.1. May work with newer versions.
MySQL5InnoDB: Support for the MySQL database, version 5.x, preferring the InnoDB storage engine when exporting tables.
MySQL57InnoDB: Support for the MySQL database, version 5.7, preferring the InnoDB storage engine when exporting tables. May work with newer versions.
MariaDB: Support for the MariaDB database. May work with newer versions.
MariaDB53: Support for the MariaDB database, version 5.3 and newer.
PostgreSQL9: Support for the PostgreSQL database, version 9. May work with later versions.
Progress: Support for the Progress database, version 9.1C. May work with newer versions.
SybaseASE15: Support for the Sybase Adaptive Server Enterprise database, version 15.
SybaseASE157: Support for the Sybase Adaptive Server Enterprise database, version 15.7. May work with newer versions.
TimesTen: Support for the TimesTen database, version 5.1. May work with newer versions.
It is important to understand that the term transaction has many different yet related meanings in regards to persistence and
Object/Relational Mapping. In most use-cases these definitions align, but that is not always the case.
Might refer to the application notion of a Unit-of-Work, as defined by the archetypal pattern.
This documentation largely treats the physical and logical notions of a transaction as one and the same.
The hibernate.transaction.coordinator_class setting accepts two short names: jdbc, for resource-local transaction coordination based directly on the JDBC Connection, and jta, for transaction coordination delegated to JTA.
If a JPA application does not provide a setting for hibernate.transaction.coordinator_class , Hibernate will automatically
build the proper transaction coordinator based on the transaction type for the persistence unit.
If a non-JPA application does not provide a setting for hibernate.transaction.coordinator_class , Hibernate will use jdbc
as the default. This default will cause problems if the application actually uses JTA-based transactions. A non-JPA application that
uses JTA-based transactions should explicitly set hibernate.transaction.coordinator_class=jta or provide a custom
org.hibernate.resource.transaction.TransactionCoordinatorBuilder that builds a
org.hibernate.resource.transaction.TransactionCoordinator that properly coordinates with JTA-based transactions.
Hibernate uses JDBC connections and JTA resources directly, without adding any additional locking behavior. Hibernate does not
lock objects in memory. The behavior defined by the isolation level of your database transactions does not change when you use
Hibernate. The Hibernate Session acts as a transaction-scoped cache providing repeatable reads for lookup by identifier and
queries that result in loading entities.
To reduce lock contention in the database, the physical database transaction needs to be as short as
possible. Long-running database transactions prevent your application from scaling to a highly-
concurrent load. Do not hold a database transaction open during end-user-level work, but open it
after the end-user-level work is finished. This concept is referred to as transactional write-behind .
Generally, JtaPlatform will need access to JNDI to resolve the JTA TransactionManager ,
UserTransaction , etc. See JNDI chapter for details on configuring access to JNDI.
Hibernate tries to discover the JtaPlatform it should use through the use of another service named
org.hibernate.engine.transaction.jta.platform.spi.JtaPlatformResolver . If that resolution does not work, or if you
wish to provide a custom implementation you will need to specify the hibernate.transaction.jta.platform setting.
Hibernate provides many implementations of the JtaPlatform contract, all with short names:
Borland
Bitronix
JBossAS
JtaPlatform for Arjuna/JBossTransactions/Narayana when used within the JBoss/WildFly Application Server.
JBossTS
JOnAS
JOTM
JRun4
OC4J
Orion
Resin
SapNetWeaver
SunOne
Weblogic
WebSphere
WebSphereExtended
To use this API, you would obtain the org.hibernate.Transaction from the Session. Transaction allows for all the normal
operations you’d expect: begin , commit and rollback , and it even exposes some cool methods like:
markRollbackOnly
registerSynchronization
that allows you to register JTA Synchronizations even in non-JTA environments. In fact, in both JTA and JDBC environments,
these Synchronizations are kept locally by Hibernate. In JTA environments, Hibernate will only ever register one single
Synchronization with the TransactionManager to avoid ordering problems.
Let’s take a look at using the Transaction API in the various environments.
JAVA
StandardServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder ()
// "jdbc" is the default, but for explicitness
.applySetting( AvailableSettings .TRANSACTION_COORDINATOR_STRATEGY, "jdbc" )
.build();
Number customerCount = (Number ) session.createQuery( "select count(c) from Customer c" ).uniqueResult();
In the CMT case, we really could have omitted all of the Transaction calls. But the point of the examples was to show that the
Transaction API really does insulate your code from the underlying transaction mechanism. In fact, if you strip away the
comments and the single configuration setting supplied at bootstrap, the code is exactly the same in all 3 examples. In other
words, we could develop that code and drop it, as-is, in any of the 3 transaction environments.
The Transaction API tries hard to make the experience consistent across all environments. To that end, it generally defers to the
JTA specification when there are differences (for example automatically trying rollback on a failed commit).
Historically, applications using Hibernate utilized either home-grown ThreadLocal-based contextual sessions, helper classes such as
HibernateUtil, or third-party frameworks, such as Spring or Pico, which provided proxy/interception-based contextual sessions.
Starting with version 3.0.1, Hibernate added the SessionFactory.getCurrentSession() method. Initially, this assumed usage
of JTA transactions, where the JTA transaction defined both the scope and context of a current session. Given the maturity of
the numerous stand-alone JTA TransactionManager implementations, most, if not all, applications should be using JTA
transaction management, whether or not they are deployed into a J2EE container. Based on that, the JTA-based contextual
sessions are all you need to use.
However, as of version 3.1, the processing behind SessionFactory.getCurrentSession() is now pluggable. To that end, a new
extension interface, org.hibernate.context.spi.CurrentSessionContext , and a new configuration parameter,
hibernate.current_session_context_class , have been added to allow pluggability of the scope and context of defining
current sessions.
org.hibernate.context.internal.JTASessionContext
current sessions are tracked and scoped by a JTA transaction. The processing here is exactly the same as in the older JTA-only
approach. See the Javadocs (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/context/internal/JTASessionContext.html) for
more details.
org.hibernate.context.internal.ThreadLocalSessionContext
current sessions are tracked by thread of execution. See the Javadocs
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/context/internal/ThreadLocalSessionContext.html) for more details.
org.hibernate.context.internal.ManagedSessionContext
current sessions are tracked by thread of execution. However, you are responsible for binding and unbinding a Session instance
with static methods on this class; it does not open, flush, or close a Session . See the Javadocs
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/context/internal/ManagedSessionContext.html) for details.
Typically, the value of this parameter would just name the implementation class to use. For the three out-of-the-box
implementations, however, there are three corresponding short names: jta, thread, and managed.
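For example, in hibernate.properties, either a short name or a fully-qualified class name works:

```properties
# equivalent to org.hibernate.context.internal.ThreadLocalSessionContext
hibernate.current_session_context_class=thread
```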
The first two implementations provide a one session - one database transaction programming model. This is also known and used
as session-per-request. The beginning and end of a Hibernate session is defined by the duration of a database transaction. If you
use programmatic transaction demarcation in plain Java SE without JTA, you are advised to use the Hibernate Transaction API
to hide the underlying transaction system from your code. If you use JTA, you can utilize the JTA interfaces to demarcate
transactions. If you execute in an EJB container that supports CMT, transaction boundaries are defined declaratively and you do
not need any transaction or session demarcation operations in your code. Refer to Transactions and concurrency control for
more information and code examples.
Using auto-commit does not circumvent database transactions. Instead, when in auto-commit mode,
JDBC drivers simply perform each call in an implicit transaction. It is as if your application called
commit after each and every JDBC call.
Within this pattern, there is a common technique of defining a current session to simplify the need of passing this Session
around to all the application components that may need access to it. Hibernate provides support for this technique through the
getCurrentSession method of the SessionFactory . The concept of a current session has to have a scope that defines the
bounds in which the notion of current is valid. This is the purpose of the
org.hibernate.context.spi.CurrentSessionContext contract.
First is a JTA transaction because it allows a callback hook to know when it is ending, which gives Hibernate a chance to close
the Session and clean up. This is represented by the org.hibernate.context.internal.JTASessionContext
implementation of the org.hibernate.context.spi.CurrentSessionContext contract. Using this implementation, a
Session will be opened the first time getCurrentSession is called within that transaction.
Second is the application request cycle itself. This is best represented with the
org.hibernate.context.internal.ManagedSessionContext implementation of the
org.hibernate.context.spi.CurrentSessionContext contract. Here an external component is responsible for managing
the lifecycle and scoping of a current session. At the start of such a scope, the ManagedSessionContext#bind() method is called,
passing in the Session . At the end of the scope, its unbind() method is called. Some common examples of such external components
include:
javax.servlet.Filter implementation
A proxy/interception container
The getCurrentSession() method has one downside in a JTA environment. If you use it, after_statement
connection release mode is also used by default. Due to a limitation of the JTA specification, Hibernate cannot
automatically clean up any unclosed ScrollableResults or Iterator instances returned by scroll() or
iterate() . Release the underlying database cursor by calling ScrollableResults#close() or
Hibernate.close(Iterator) explicitly from a finally block.
The first screen of a dialog opens. The data seen by the user is loaded in a particular Session and database transaction. The user
is free to modify the objects.
The user uses a UI element to save their work after five minutes of editing. The modifications are made persistent. The user also
expects to have exclusive access to the data during the edit session.
Even though we have multiple database accesses here, from the point of view of the user, this series of steps represents a single
unit of work. There are many ways to implement this in your application.
A first naive implementation might keep the Session and database transaction open while the user is editing, using database-
level locks to prevent other users from modifying the same data and to guarantee isolation and atomicity. This is an anti-pattern
because lock contention is a bottleneck which will prevent scalability in the future.
Several database transactions are used to implement the conversation. In this case, maintaining isolation of business processes
becomes the partial responsibility of the application tier. A single conversation usually spans several database transactions. These
multiple database accesses can only be atomic as a whole if only one of these database transactions (typically the last one) stores
the updated data. All others only read data. A common way to receive this data is through a wizard-style dialog spanning several
request/response cycles. Hibernate includes some features which make this easy to implement.
Detached Objects: If you decide to use the session-per-request pattern, all loaded
instances will be in the detached state during user think time.
Hibernate allows you to reattach the objects and persist the
modifications. The pattern is called session-per-request-with-
detached-objects. Automatic versioning is used to isolate
concurrent modifications.
The session-per-application is also considered an anti-pattern. The Hibernate Session , like the JPA EntityManager , is not a
thread-safe object and it is intended to be confined to a single thread at once. If the Session is shared among multiple threads,
there will be race conditions as well as visibility issues, so beware of this.
An exception thrown by Hibernate means you have to rollback your database transaction and close the Session immediately. If
your Session is bound to the application, you have to stop the application. Rolling back the database transaction does not put
your business objects back into the state they were at the start of the transaction. This means that the database state and the
business objects will be out of sync. Usually, this is not a problem because exceptions are not recoverable and you will have to
start over after rollback anyway.
The Session caches every object that is in a persistent state (watched and checked for dirty state by Hibernate). If you keep it
open for a long time or simply load too much data, it will grow endlessly until you get an OutOfMemoryError . One solution
is to call clear() and evict() to manage the Session cache, but you should consider a Stored Procedure if you need mass
data operations. Some solutions are shown in the Batching chapter. Keeping a Session open for the duration of a user session
also means a higher probability of stale data.
9. JNDI
Hibernate does optionally interact with JNDI on the application’s behalf. Generally, it does this when the application:
is using JTA transactions and the JtaPlatform needs to do JNDI lookups for TransactionManager , UserTransaction , etc
All of these JNDI calls route through a single service whose role is org.hibernate.engine.jndi.spi.JndiService . The
standard JndiService accepts a number of configuration settings:
hibernate.jndi.class
names the javax.naming.InitialContext implementation class to use
hibernate.jndi.url
names the JNDI provider connection url
Any other settings prefixed with hibernate.jndi. will be collected and passed along to the JNDI provider.
The standard JndiService assumes that all JNDI calls are relative to the same InitialContext . If
your application uses multiple naming servers for whatever reason, you will need a custom
JndiService implementation to handle those details.
10. Locking
In a relational database, locking refers to actions taken to prevent data from changing between the time it is read and the time it is
used.
Optimistic
Optimistic locking (http://en.wikipedia.org/wiki/Optimistic_locking) assumes that multiple transactions can complete without
affecting each other, and that therefore transactions can proceed without locking the data resources that they affect. Before
committing, each transaction verifies that no other transaction has modified its data. If the check reveals conflicting
modifications, the committing transaction rolls back.
Pessimistic
Pessimistic locking assumes that concurrent transactions will conflict with each other, and requires resources to be locked after
they are read and only unlocked after the application has finished using the data.
Hibernate provides mechanisms for implementing both types of locking in your applications.
10.1. Optimistic
When your application uses long transactions or conversations that span several database transactions, you can store versioning
data so that if the same entity is updated by two conversations, the last to commit changes is informed of the conflict, and does
not override the other conversation’s work. This approach guarantees some isolation, but scales well and works particularly well
in read-often-write-sometimes situations.
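The conflict detection behind such versioning data can be sketched, independently of Hibernate, as a compare-and-set: an update only succeeds if the version read earlier still matches the stored one. The class below is a toy model for illustration, not Hibernate API; in a real application the check happens in the UPDATE statement's WHERE clause:

```java
public class VersionedRecord {

    private String name;
    private long version;

    VersionedRecord(String name) {
        this.name = name;
    }

    // Emulates: UPDATE ... SET name = ?, version = version + 1
    //           WHERE id = ? AND version = ?
    // Returns false when the expected version no longer matches,
    // i.e. another transaction committed a change in between.
    synchronized boolean update(String newName, long expectedVersion) {
        if (version != expectedVersion) {
            return false; // would surface as an OptimisticLockException
        }
        name = newName;
        version++;
        return true;
    }

    synchronized long version() {
        return version;
    }
}
```

If two conversations both read version 0, the first update succeeds and bumps the version to 1; the second, still expecting version 0, fails instead of silently overwriting the first one's work.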
Hibernate provides two different mechanisms for storing versioning information, a dedicated version number or a timestamp.
A version or timestamp property can never be null for a detached instance. Hibernate detects any
instance with a null version or timestamp as transient, regardless of other unsaved-value strategies
that you specify. Declaring a nullable version or timestamp property is an easy way to avoid problems
with transitive reattachment in Hibernate, especially useful if you use assigned identifiers or composite keys.
JPA defines support for optimistic locking based on either a version (sequential numeric) or timestamp strategy. To enable this
style of optimistic locking simply add the javax.persistence.Version to the persistent attribute that defines the optimistic
locking value. According to JPA, the valid types for these attributes are limited to:
int or Integer
short or Short
long or Long
java.sql.Timestamp
However, Hibernate also allows you to use the Java 8 Date/Time types, such as Instant .
JAVA
@Entity(name = "Person")
public static class Person {
@Id
@GeneratedValue
private Long id;
@Column(name = "`name`")
private String name;
@Version
private long version;
JAVA
@Entity(name = "Person")
public static class Person {
@Id
@GeneratedValue
private Long id;
@Column(name = "`name`")
private String name;
@Version
private Timestamp version;
@Entity(name = "Person")
public static class Person {
@Id
@GeneratedValue
private Long id;
@Column(name = "`name`")
private String name;
@Version
private Instant version;
JAVA
@Version
private long version;
Here, the version property is mapped to the version column, and the entity manager uses it to detect conflicting updates, and
prevent the loss of updates that would otherwise be overwritten by a last-commit-wins strategy.
The version column can be of any type, as long as you define and implement the appropriate UserVersionType .
Your application is forbidden from altering the version number set by Hibernate. To artificially increase the version number, see
the documentation for LockModeType.OPTIMISTIC_FORCE_INCREMENT and LockModeType.PESSIMISTIC_FORCE_INCREMENT
in the Hibernate Entity Manager reference documentation.
If the version number is generated by the database, such as a trigger, use the annotation
@org.hibernate.annotations.Generated(GenerationTime.ALWAYS) on the version attribute.
Timestamp
Timestamps are a less reliable way of optimistic locking than version numbers but can be used by applications for other purposes
as well. Timestamping is automatically used if you use the @Version annotation on a Date or Calendar property type.
JAVA
@Version
private Date version;
Hibernate can retrieve the timestamp value from the database or the JVM, by reading the value you specify for the
@org.hibernate.annotations.Source annotation. The value can be either org.hibernate.annotations.SourceType.DB or
org.hibernate.annotations.SourceType.VM . The default behavior is to use the database and is also used if you don’t specify
the annotation at all.
The timestamp can also be generated by the database instead of Hibernate if you use the
@org.hibernate.annotations.Generated(GenerationTime.ALWAYS) or the @Source annotation.
JAVA
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
@Version
@Source(value = SourceType .DB)
private Date version;
}
Now, when persisting a Person entity, Hibernate calls the database-specific current timestamp retrieval function:
JAVA
Person person = new Person ();
person.setId( 1L );
person.setFirstName( "John" );
person.setLastName( "Doe" );
assertNull( person.getVersion() );
entityManager.persist( person );
assertNotNull( person.getVersion() );
SQL
CALL current_timestamp()
INSERT INTO
Person
(firstName, lastName, version, id)
VALUES
(?, ?, ?, ?)
Excluding attributes
By default, every entity attribute modification is going to trigger a version increment. If there is an entity property which
should not bump up the entity version, then you need to annotate it with the Hibernate @OptimisticLock annotation.
JAVA
@Entity(name = "Phone")
public static class Phone {
@Id
private Long id;
@Column(name = "`number`")
private String number;
@OptimisticLock(excluded = true )
private long callCount;
@Version
private Long version;
This way, if one thread modifies the Phone number while a second thread increments the callCount attribute, the two
concurrent transactions are not going to conflict as illustrated by the following example.
JAVA
doInJPA( this::entityManagerFactory, entityManager -> {
Phone phone = entityManager.find( Phone .class , 1L );
phone.setNumber( "+123-456-7890" );
update
Phone
set
callCount = 1,
"number" = '123-456-7890',
version = 0
where
id = 1
and version = 0
update
Phone
set
callCount = 0,
"number" = '+123-456-7890',
version = 1
where
id = 1
and version = 0
When Bob changes the Phone entity callCount , the entity version is not bumped up. That’s why Alice’s UPDATE succeeds since
the entity version is still 0, even if Bob has changed the record since Alice loaded it.
Although there is no conflict between Bob and Alice, Alice’s UPDATE overrides Bob’s change to the
callCount attribute.
For this reason, you should only use this feature if you can accommodate lost updates on the excluded
entity properties.
Hibernate supports a form of optimistic locking that does not require a dedicated "version attribute". This is also useful when
modeling legacy schemas.
The idea is that you can get Hibernate to perform "version checks" using either all of the entity’s attributes or just the attributes
that have changed. This is achieved through the use of the @OptimisticLocking
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/OptimisticLocking.html) annotation which defines a single
attribute of type org.hibernate.annotations.OptimisticLockType
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/OptimisticLockType.html).
NONE
The implicit optimistic locking mechanism is disabled.
VERSION
performs optimistic locking using a dedicated version column. This is the default strategy.
ALL
performs optimistic locking based on all fields as part of an expanded WHERE clause restriction for the UPDATE/DELETE SQL
statements
DIRTY
performs optimistic locking based on dirty fields as part of an expanded WHERE clause restriction for the UPDATE/DELETE SQL
statements
JAVA
@Entity(name = "Person")
@OptimisticLocking(type = OptimisticLockType.ALL)
@DynamicUpdate
public static class Person {

    @Id
    private Long id;

    @Column(name = "`name`")
    private String name;

    private String city;

    private String country;

    @Column(name = "created_on")
    private Timestamp createdOn;

    //Getters and setters are omitted for brevity
}
JAVA
Person person = entityManager.find( Person.class, 1L );
person.setCity( "Washington D.C." );
UPDATE
Person
SET
city=?
WHERE
id=?
AND city=?
AND country=?
AND created_on=?
AND "name"=?
As you can see, all the columns of the associated database row are used in the WHERE clause. If any column has changed after the
row was loaded, there won’t be any match, and a StaleStateException or an OptimisticLockException is going to be
thrown.
When using OptimisticLockType.ALL , you should also use @DynamicUpdate because the UPDATE
statement must take into consideration all the entity property values.
JAVA
@Entity(name = "Person")
@OptimisticLocking(type = OptimisticLockType.DIRTY)
@DynamicUpdate
@SelectBeforeUpdate
public static class Person {

    @Id
    private Long id;

    @Column(name = "`name`")
    private String name;

    private String city;

    private String country;

    @Column(name = "created_on")
    private Timestamp createdOn;

    //Getters and setters are omitted for brevity
}
JAVA
Person person = entityManager.find( Person.class, 1L );
person.setCity( "Washington D.C." );
SQL
UPDATE
Person
SET
city=?
WHERE
id=?
and city=?
This time, only the database column that has changed was used in the WHERE clause.
When using OptimisticLockType.DIRTY , you should also use @DynamicUpdate because the UPDATE statement
must take into consideration all the dirty entity property values, and also the @SelectBeforeUpdate annotation
so that detached entities are properly handled by the Session#update(entity)
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/Session.html#update-java.lang.Object-) operation.
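As a rough illustration of what the DIRTY strategy produces, the following plain-Java sketch (not Hibernate internals; class and method names are made up) diffs the loaded snapshot against the current entity state and builds an UPDATE whose SET clause and expanded WHERE restriction contain only the dirty columns:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class DirtyLockSqlDemo {
    // Sketch: with OptimisticLockType.DIRTY plus @DynamicUpdate, only columns
    // whose value changed appear in SET, and the same columns (compared against
    // their *loaded* values) guard the WHERE clause.
    static String buildUpdate(String table, Map<String, Object> loaded, Map<String, Object> current) {
        Map<String, Object> dirty = new LinkedHashMap<>();
        current.forEach((col, val) -> {
            if (!val.equals(loaded.get(col))) dirty.put(col, val); // detect dirty columns
        });
        String set = dirty.keySet().stream()
            .map(c -> c + "=?").collect(Collectors.joining(", "));
        String where = dirty.keySet().stream()
            .map(c -> c + "=?").collect(Collectors.joining(" and ", " and ", ""));
        return "update " + table + " set " + set + " where id=?" + where;
    }

    public static void main(String[] args) {
        Map<String, Object> loaded = new LinkedHashMap<>();
        loaded.put("city", "Washington");
        loaded.put("country", "US");
        Map<String, Object> current = new LinkedHashMap<>(loaded);
        current.put("city", "Washington D.C."); // the only dirty attribute
        System.out.println(buildUpdate("Person", loaded, current));
    }
}
```

The generated string has the same shape as the SQL shown above: only the modified city column appears in both the SET clause and the optimistic-lock WHERE restriction.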
10.2. Pessimistic
Typically, you only need to specify an isolation level for the JDBC connections and let the database handle locking issues. If you do
need to obtain exclusive pessimistic locks or re-obtain locks at the start of a new transaction, Hibernate gives you the tools you
need.
Hibernate always uses the locking mechanism of the database, and never locks objects in memory.
The explicit user request mentioned above occurs as a consequence of any of the following actions:
a call to Session.lock() .
a call to Query.setLockMode() .
If you call Session.load() with option UPGRADE , UPGRADE_NOWAIT or UPGRADE_SKIPLOCKED , and the requested object is not
already loaded by the session, the object is loaded using SELECT … FOR UPDATE .
If you call load() for an object that is already loaded with a less restrictive lock than the one you request, Hibernate calls
lock() for that object.
Session.lock() performs a version number check if the specified lock mode is READ , UPGRADE , UPGRADE_NOWAIT or
UPGRADE_SKIPLOCKED . In the case of UPGRADE , UPGRADE_NOWAIT or UPGRADE_SKIPLOCKED , the SELECT … FOR UPDATE syntax is
used.
If the requested lock mode is not supported by the database, Hibernate uses an appropriate alternate mode instead of throwing
an exception. This ensures that applications are portable.
javax.persistence.lock.timeout
gives the number of milliseconds a lock acquisition request will wait before throwing an exception.
javax.persistence.lock.scope
defines the scope (http://docs.oracle.com/javaee/7/api/javax/persistence/PessimisticLockScope.html) of the lock acquisition request. The
scope can either be NORMAL (default value) or EXTENDED . The EXTENDED scope will cause a lock acquisition request to be
passed to other owned table structures (e.g. @Inheritance(strategy=InheritanceType.JOINED) , @ElementCollection ).
JAVA
entityManager.find(
    Person.class, id, LockModeType.PESSIMISTIC_WRITE,
    Collections.singletonMap( "javax.persistence.lock.timeout", 200 )
);
SQL
SELECT explicitlo0_.id AS id1_0_0_,
explicitlo0_."name" AS name2_0_0_
FROM person explicitlo0_
WHERE explicitlo0_.id = 1
FOR UPDATE wait 2
Not all JDBC database drivers support setting a timeout value for a locking request. If not supported,
the Hibernate dialect ignores this query hint.
The following example shows how to obtain a shared database lock without waiting for the lock acquisition request.
SQL
SELECT p.id AS id1_0_0_ ,
p.name AS name2_0_0_
FROM Person p
WHERE p.id = 1
SELECT id
FROM Person
WHERE id = 1
FOR SHARE NOWAIT
10.6. Follow-on-locking
When using Oracle, the FOR UPDATE exclusive locking clause
(https://docs.oracle.com/database/121/SQLRF/statements_10002.htm#SQLRF55371) cannot be used with:
DISTINCT
GROUP BY
UNION
inlined views (derived tables), therefore affecting the legacy Oracle pagination mechanism as well.
For this reason, Hibernate uses secondary selects to lock the previously fetched entities.
JAVA
List<Person> persons = entityManager.createQuery(
    "select DISTINCT p from Person p", Person.class )
.setLockMode( LockModeType.PESSIMISTIC_WRITE )
.getResultList();
SQL
SELECT DISTINCT p.id as id1_0_, p."name" as name2_0_
FROM Person p
SELECT id
FROM Person
WHERE id = 1 FOR UPDATE
SELECT id
FROM Person
WHERE id = 2 FOR UPDATE
To avoid the N+1 query problem, a separate query can be used to apply the lock using the associated
entity identifiers.
JAVA
List<Person> persons = entityManager.createQuery(
    "select DISTINCT p from Person p", Person.class )
.getResultList();

entityManager.createQuery(
    "select p.id from Person p where p in :persons" )
.setLockMode( LockModeType.PESSIMISTIC_WRITE )
.setParameter( "persons", persons )
.getResultList();
SQL
SELECT DISTINCT p.id as id1_0_, p."name" as name2_0_
FROM Person p
The lock request was moved from the original query to a secondary one which takes the previously fetched entities to lock their
associated database records.
Prior to Hibernate 5.2.1, the follow-on-locking mechanism was applied uniformly to any locking query executing on Oracle. Since
5.2.1, the Oracle Dialect tries to figure out whether the current query demands the follow-on-locking mechanism.
More importantly, you can override the default follow-on-locking detection logic and explicitly enable or disable it on a
per-query basis.
JAVA
List<Person> persons = entityManager.createQuery(
    "select p from Person p", Person.class )
.setMaxResults( 10 )
.unwrap( Query.class )
.setLockOptions(
    new LockOptions( LockMode.PESSIMISTIC_WRITE )
        .setFollowOnLocking( false ) )
.getResultList();
SQL
SELECT *
FROM (
SELECT p.id as id1_0_, p."name" as name2_0_
FROM Person p
)
WHERE rownum <= 10
FOR UPDATE
The follow-on-locking mechanism should be explicitly enabled only if the currently executing query fails
because the FOR UPDATE clause cannot be applied, meaning that the Dialect resolving mechanism needs to be
further improved.
11. Fetching
Fetching, essentially, is the process of grabbing data from the database and making it available to the application. Tuning how an
application does fetching is one of the biggest factors in determining how an application will perform. Fetching too much data, in
terms of width (values/columns) and/or depth (results/rows), adds unnecessary overhead in terms of both JDBC communication
and ResultSet processing. Fetching too little data might cause additional fetching to be needed. Tuning how an application fetches
data presents a great opportunity to influence the overall application performance.
Fetching data "now" is generally termed eager or immediate, while fetching it "later" is generally termed lazy or delayed.
static
Static definition of fetching strategies is done in the mappings. The statically-defined fetch strategies are used in the absence of
any dynamically-defined strategies.
SELECT
Performs a separate SQL select to load the data. This can either be EAGER (the second select is issued immediately) or LAZY
(the second select is delayed until the data is needed). This is the strategy generally termed N+1.
JOIN
Inherently an EAGER style of fetching. The data to be fetched is obtained through the use of an SQL outer join.
BATCH
Performs a separate SQL select to load a number of related data items using an IN-restriction as part of the SQL WHERE-
clause based on a batch size. Again, this can either be EAGER (the second select is issued immediately) or LAZY (the second
select is delayed until the data is needed).
SUBSELECT
Performs a separate SQL select to load associated data based on the SQL restriction used to load the owner. Again, this can
either be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until the data is needed).
fetch profiles
defined in mappings, but can be enabled/disabled on the Session .
HQL/JPQL
HQL/JPQL queries, as well as both Hibernate and JPA Criteria queries, have the ability to specify fetching specific to said query.
entity graphs
Starting in Hibernate 4.2 (JPA 2.1) this is also an option.
JAVA
@Entity(name = "Department")
public static class Department {
@Id
private Long id;
@Entity(name = "Employee")
public static class Employee {
@Id
private Long id;
@NaturalId
private String username;
The Employee entity has a @ManyToOne association to a Department which is fetched eagerly.
When issuing a direct entity fetch, Hibernate executes the following SQL query:
JAVA
Employee employee = entityManager.find( Employee.class, 1L );
select
e.id as id1_1_0_,
e.department_id as departme3_1_0_,
e.username as username2_1_0_,
d.id as id1_0_1_
from
Employee e
left outer join
Department d
on e.department_id=d.id
where
e.id = 1
The LEFT JOIN clause is added to the generated SQL query because this association is required to be fetched eagerly.
On the other hand, if you are using an entity query that does not contain a JOIN FETCH directive to the Department association:
JAVA
Employee employee = entityManager.createQuery(
    "select e " +
    "from Employee e " +
    "where e.id = :id", Employee.class )
.setParameter( "id", 1L )
.getSingleResult();
SQL
select
e.id as id1_1_,
e.department_id as departme3_1_,
e.username as username2_1_
from
Employee e
where
e.id = 1
select
d.id as id1_0_0_
from
Department d
where
d.id = 1
Hibernate uses a secondary select instead. This is because the entity query fetch policy cannot be overridden, so Hibernate
requires a secondary select to ensure that the EAGER association is fetched prior to returning the result to the user.
If you forget to JOIN FETCH all EAGER associations, Hibernate is going to issue a secondary select for
each and every one of them which, in turn, can lead to N+1 query issues.
JAVA
@Entity(name = "Department")
public static class Department {
@Id
private Long id;
@OneToMany(mappedBy = "department")
private List<Employee > employees = new ArrayList <>();
@Entity(name = "Employee")
public static class Employee {
@Id
private Long id;
@NaturalId
private String username;
@Column(name = "pswd")
@ColumnTransformer(
read = "decrypt( 'AES', '00', pswd )",
write = "encrypt('AES', '00', ?)"
)
private String password;
@ManyToMany(mappedBy = "employees")
private List<Project > projects = new ArrayList <>();
@Entity(name = "Project")
public class Project {
@Id
private Long id;
@ManyToMany
private List<Employee > employees = new ArrayList <>();
The Hibernate recommendation is to statically mark all associations lazy and to use dynamic fetching
strategies for eagerness. This is unfortunately at odds with the JPA specification which defines that all
one-to-one and many-to-one associations should be eagerly fetched by default. Hibernate, as a JPA
provider, honors that default.
11.4. No fetching
For the first use case, consider the application login process for an Employee . Let’s assume that login only requires access to the
Employee information, not Project nor Department information.
JAVA
Employee employee = entityManager.createQuery(
"select e " +
"from Employee e " +
"where " +
" e.username = :username and " +
" e.password = :password",
Employee.class )
.setParameter( "username", username)
.setParameter( "password", password)
.getSingleResult();
In this example, the application gets the Employee data. However, because all associations from Employee are declared as LAZY
(JPA defines the default for collections as LAZY) no other data is fetched.
If the login process does not need access to the Employee information specifically, another fetching optimization here would be
to limit the width of the query results.
JAVA
Integer accessLevel = entityManager.createQuery(
"select e.accessLevel " +
"from Employee e " +
"where " +
" e.username = :username and " +
" e.password = :password",
Integer.class )
.setParameter( "username", username)
.setParameter( "password", password)
.getSingleResult();
JAVA
Employee employee = entityManager.createQuery(
"select e " +
"from Employee e " +
"left join fetch e.projects " +
"where " +
" e.username = :username and " +
" e.password = :password",
Employee.class )
.setParameter( "username", username)
.setParameter( "password", password)
.getSingleResult();
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Employee> query = builder.createQuery( Employee.class );
Root<Employee> root = query.from( Employee.class );
root.fetch( "projects", JoinType.LEFT );
query.select( root ).where(
    builder.and(
        builder.equal( root.get( "username" ), username ),
        builder.equal( root.get( "password" ), password )
    )
);
Employee employee = entityManager.createQuery( query ).getSingleResult();
In this example, we have an Employee and their Projects loaded in a single query, shown both as an HQL query and a JPA
Criteria query. In both cases, this resolves to exactly one database query to get all that information.
JAVA
@Entity(name = "Employee")
@NamedEntityGraph(name = "employee.projects",
attributeNodes = @NamedAttributeNode("projects")
)
JAVA
Employee employee = entityManager.find(
    Employee.class,
    userId,
    Collections.singletonMap(
        "javax.persistence.fetchgraph",
        entityManager.getEntityGraph( "employee.projects" )
    )
);
Although the JPA standard specifies that you can override an EAGER fetching association at runtime
using the javax.persistence.fetchgraph hint, currently, Hibernate does not implement this feature,
so EAGER associations cannot be fetched lazily. For more info, check out the HHH-8776
(https://hibernate.atlassian.net/browse/HHH-8776) Jira issue.
When executing a JPQL query, if an EAGER association is omitted, Hibernate will issue a secondary select for
every association needed to be fetched eagerly, which can lead to N+1 query issues.
For this reason, it’s better to use LAZY associations, and only fetch them eagerly on a per-query basis.
Consider a Project parent entity which has an employees child association, where we'd also like to fetch the department of each
Employee child. In this case, a subgraph is needed:
JAVA
@Entity(name = "Project")
@NamedEntityGraph(name = "project.employees",
attributeNodes = @NamedAttributeNode(
value = "employees",
subgraph = "project.employees.department"
),
subgraphs = @NamedSubgraph(
name = "project.employees.department",
attributeNodes = @NamedAttributeNode( "department" )
)
)
public static class Project {
@Id
private Long id;
@ManyToMany
private List<Employee > employees = new ArrayList <>();
When fetching this entity graph, Hibernate generates the following SQL query:
JAVA
Project project = doInJPA( this::entityManagerFactory, entityManager -> {
    return entityManager.find(
        Project.class,
        1L,
        Collections.singletonMap(
            "javax.persistence.fetchgraph",
            entityManager.getEntityGraph( "project.employees" )
        )
    );
} );
select
p.id as id1_2_0_, e.id as id1_1_1_, d.id as id1_0_2_,
e.accessLevel as accessLe2_1_1_,
e.department_id as departme5_1_1_,
decrypt( 'AES', '00', e.pswd ) as pswd3_1_1_,
e.username as username4_1_1_,
p_e.projects_id as projects1_3_0__,
p_e.employees_id as employee2_3_0__
from
Project p
inner join
Project_Employee p_e
on p.id=p_e.projects_id
inner join
Employee e
on p_e.employees_id=e.id
inner join
Department d
on e.department_id=d.id
where
p.id = ?
JAVA
@Entity(name = "Employee")
@FetchProfile(
name = "employee.projects",
fetchOverrides = {
@FetchProfile.FetchOverride(
    entity = Employee.class,
    association = "projects",
    mode = FetchMode.JOIN
)
}
)
JAVA
session.enableFetchProfile( "employee.projects" );
Employee employee = session.bySimpleNaturalId( Employee.class ).load( username );
Here the Employee is obtained by natural-id lookup and the Employee’s Project data is fetched eagerly. If the Employee data is
resolved from cache, the Project data is resolved on its own. However, if the Employee data is not resolved in cache, the
Employee and Project data is resolved in one SQL query via join as we saw above.
JAVA
@Entity(name = "Department")
public static class Department {
@Id
private Long id;
@OneToMany(mappedBy = "department")
@BatchSize(size = 5)
private List<Employee> employees = new ArrayList<>();
@Entity(name = "Employee")
public static class Employee {
@Id
private Long id;
@NaturalId
private String name;
Considering that we have previously fetched several Department entities, and now we need to initialize the employees entity
collection for each particular Department , the @BatchSize annotation allows us to load the Employee entities for multiple
Department entities in a single database roundtrip.
JAVA
List<Department> departments = entityManager.createQuery(
    "select d " +
    "from Department d " +
    "inner join d.employees e " +
    "where e.name like 'John%'", Department.class )
.getResultList();
SELECT
d.id as id1_0_
FROM
Department d
INNER JOIN
Employee employees1_
ON d.id=employees1_.department_id
SELECT
e.department_id as departme3_1_1_,
e.id as id1_1_1_,
e.id as id1_1_0_,
e.department_id as departme3_1_0_,
e.name as name2_1_0_
FROM
Employee e
WHERE
e.department_id IN (
0, 2, 3, 4, 5
)
SELECT
e.department_id as departme3_1_1_,
e.id as id1_1_1_,
e.id as id1_1_0_,
e.department_id as departme3_1_0_,
e.name as name2_1_0_
FROM
Employee e
WHERE
e.department_id IN (
6, 7, 8, 9, 1
)
As you can see in the example above, there are only two SQL statements used to fetch the Employee entities associated with
multiple Department entities.
Without @BatchSize , you'd run into an N+1 query issue, so, instead of 2 SQL statements, there would
be 10 queries needed for fetching the Employee child entities.
However, although @BatchSize is better than running into an N+1 query issue, most of the time, a
DTO projection or a JOIN FETCH is a much better alternative since it allows you to fetch all the required data
with a single query.
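The identifier grouping behind @BatchSize can be sketched in plain Java (illustrative only; this is not how Hibernate is implemented internally): the owners whose collections still need initializing are split into chunks of at most the batch size, and each chunk yields one IN-restricted statement.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class BatchSizeDemo {
    // Split the identifiers of the Departments whose employees collections need
    // loading into chunks of at most batchSize, one SQL statement per chunk.
    static List<String> batchedQueries(List<Long> departmentIds, int batchSize) {
        List<String> queries = new ArrayList<>();
        for (int i = 0; i < departmentIds.size(); i += batchSize) {
            List<Long> chunk = departmentIds.subList(i, Math.min(i + batchSize, departmentIds.size()));
            queries.add("SELECT ... FROM Employee e WHERE e.department_id IN ("
                + chunk.stream().map(String::valueOf).collect(Collectors.joining(", "))
                + ")");
        }
        return queries;
    }

    public static void main(String[] args) {
        // 10 Departments with a batch size of 5 -> 2 statements instead of 10
        List<Long> ids = List.of(0L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 1L);
        batchedQueries(ids, 5).forEach(System.out::println);
    }
}
```

With the 10 identifiers from the example above and a batch size of 5, the sketch produces exactly the two IN restrictions shown in the SQL log.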
SELECT
The association is going to be fetched lazily using a secondary select for each individual entity, collection, or join load. It’s
equivalent to JPA FetchType.LAZY fetching strategy.
JOIN
Use an outer join to load the related entities, collections or joins when using direct fetching. It’s equivalent to JPA
FetchType.EAGER fetching strategy.
SUBSELECT
Available for collections only. When accessing a non-initialized collection, this fetch mode will trigger loading all elements of
all collections of the same role for all owners associated with the persistence context using a single secondary select.
11.10. FetchMode.SELECT
To demonstrate how FetchMode.SELECT works, consider the following entity mapping:
JAVA
@Entity(name = "Department")
public static class Department {
@Id
private Long id;
@Entity(name = "Employee")
public static class Employee {
@Id
@GeneratedValue
private Long id;
@NaturalId
private String username;
Considering there are multiple Department entities, each one having multiple Employee entities, when executing the following
test case, Hibernate fetches every uninitialized Employee collection using a secondary SELECT statement upon accessing the
child collection for the first time:
SQL
SELECT
d.id as id1_0_
FROM
Department d
-- Fetched 2 Departments
SELECT
e.department_id as departme3_1_0_,
e.id as id1_1_0_,
e.id as id1_1_1_,
e.department_id as departme3_1_1_,
e.username as username2_1_1_
FROM
Employee e
WHERE
e.department_id = 1
SELECT
e.department_id as departme3_1_0_,
e.id as id1_1_0_,
e.id as id1_1_1_,
e.department_id as departme3_1_1_,
e.username as username2_1_1_
FROM
Employee e
WHERE
e.department_id = 2
The more Department entities are fetched by the first query, the more secondary SELECT statements are executed to initialize
the employees collections. Therefore, FetchMode.SELECT can lead to N+1 query issues.
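The arithmetic behind that warning can be sketched as follows (plain Java, illustrative):

```java
public class NPlusOneDemo {
    // FetchMode.SELECT: one query for the Department entities, plus one
    // secondary select per employees collection that gets accessed.
    static int selectModeQueryCount(int departmentsAccessed) {
        return 1 + departmentsAccessed;
    }

    public static void main(String[] args) {
        System.out.println(selectModeQueryCount(2));   // the 3 statements shown above
        System.out.println(selectModeQueryCount(100)); // grows linearly with the result set
    }
}
```

For the two Departments in the log above that's 3 statements; for 100 Departments it would be 101.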
11.11. FetchMode.SUBSELECT
To demonstrate how FetchMode.SUBSELECT works, we are going to modify the FetchMode.SELECT mapping example to use
FetchMode.SUBSELECT :
JAVA
@OneToMany(mappedBy = "department", fetch = FetchType.LAZY)
@Fetch(FetchMode.SUBSELECT)
private List<Employee> employees = new ArrayList<>();
Now, we are going to fetch all Department entities that match a given filtering predicate and then navigate their employees
collections.
Hibernate is going to avoid the N+1 query issue by generating a single SQL statement to initialize all employees collections for all
Department entities that were previously fetched. Instead of passing all entity identifiers, Hibernate simply reruns the
previous query that fetched the Department entities.
JAVA
List<Department> departments = entityManager.createQuery(
    "select d " +
    "from Department d " +
    "where d.name like :token", Department.class )
.setParameter( "token", "Department%" )
.getResultList();
SQL
SELECT
d.id as id1_0_
FROM
Department d
where
d.name like 'Department%'
-- Fetched 2 Departments
SELECT
e.department_id as departme3_1_1_,
e.id as id1_1_1_,
e.id as id1_1_0_,
e.department_id as departme3_1_0_,
e.username as username2_1_0_
FROM
Employee e
WHERE
e.department_id in (
SELECT
fetchmodes0_.id
FROM
Department fetchmodes0_
WHERE
fetchmodes0_.name like 'Department%'
)
11.12. FetchMode.JOIN
To demonstrate how FetchMode.JOIN works, we are going to modify the FetchMode.SELECT mapping example to use
FetchMode.JOIN instead:
JAVA
@OneToMany(mappedBy = "department")
@Fetch(FetchMode.JOIN)
private List<Employee> employees = new ArrayList<>();
Now, we are going to fetch one Department and navigate its employees collections.
The reason why we are not using a JPQL query to fetch multiple Department entities is because the
FetchMode.JOIN strategy would be overridden by the query fetching directive.
To fetch multiple relationships with a JPQL query, the JOIN FETCH directive must be used instead.
Therefore, FetchMode.JOIN is useful for when entities are fetched directly, via their identifier or natural-id.
Also, the FetchMode.JOIN acts as a FetchType.EAGER strategy. Even if we mark the association as
FetchType.LAZY , the FetchMode.JOIN will load the association eagerly.
Hibernate is going to avoid the secondary query by issuing an OUTER JOIN for the employees collection.
JAVA
Department department = entityManager.find( Department.class, 1L );
assertEquals( 3, department.getEmployees().size() );
SQL
SELECT
d.id as id1_0_0_,
e.department_id as departme3_1_1_,
e.id as id1_1_1_,
e.id as id1_1_2_,
e.department_id as departme3_1_2_,
e.username as username2_1_2_
FROM
Department d
LEFT OUTER JOIN
Employee e
on d.id = e.department_id
WHERE
d.id = 1
-- Fetched department: 1
This time, there was no secondary query because the child collection was loaded along with the parent entity.
11.13. @LazyCollection
The @LazyCollection (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyCollection.html) annotation is used
to specify the lazy fetching behavior of a given collection. The possible values are given by the LazyCollectionOption
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyCollectionOption.html) enumeration:
TRUE
FALSE
EXTRA
The TRUE and FALSE values are deprecated since you should be using the JPA FetchType
(http://docs.oracle.com/javaee/7/api/javax/persistence/FetchType.html) attribute of the @ElementCollection , @OneToMany , or
@ManyToMany collection.
The EXTRA value has no equivalent in the JPA specification, and it’s used to avoid loading the entire collection even when the
collection is accessed for the first time. Each element is fetched individually using a secondary query.
JAVA
@Entity(name = "Department")
public static class Department {
@Id
private Long id;
@Entity(name = "Employee")
public static class Employee {
@Id
private Long id;
@NaturalId
private String username;
LazyCollectionOption.EXTRA only works for ordered collections, either List(s) that are annotated with
@OrderColumn or Map(s).
For bags (e.g. regular List(s) of entities that do not preserve any certain ordering), the
@LazyCollection(LazyCollectionOption.EXTRA) annotation behaves like any other FetchType.LAZY collection (the collection
is fetched entirely upon being accessed for the first time).
JAVA
Department department = new Department ();
department.setId( 1L );
entityManager.persist( department );
When fetching the employees collection entries by their position in the List , Hibernate generates the following SQL statements:
JAVA
Department department = entityManager.find( Department.class, 1L );
SELECT
max(order_id) + 1
FROM
Employee
WHERE
department_id = ?
SELECT
e.id as id1_1_0_,
e.department_id as departme3_1_0_,
e.username as username2_1_0_
FROM
Employee e
WHERE
e.department_id=?
AND e.order_id=?
SELECT
e.id as id1_1_0_,
e.department_id as departme3_1_0_,
e.username as username2_1_0_
FROM
Employee e
WHERE
e.department_id=?
AND e.order_id=?
SELECT
e.id as id1_1_0_,
e.department_id as departme3_1_0_,
e.username as username2_1_0_
FROM
Employee e
WHERE
e.department_id=?
AND e.order_id=?
Therefore, the child entities were fetched one after the other without triggering a full collection
initialization.
For this reason, caution is advised because LazyCollectionOption.EXTRA lazy collections are prone to
N+1 query issues.
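The per-index access pattern of LazyCollectionOption.EXTRA can be mimicked with a plain-Java sketch (illustrative; Hibernate's actual collection wrappers are considerably more elaborate): each indexed access triggers its own simulated secondary select instead of initializing the whole collection.

```java
import java.util.function.IntFunction;

public class ExtraLazyListDemo {
    static int queryCount; // counts simulated secondary selects

    // Wraps a per-index loader so that every element access issues one "query".
    static <T> IntFunction<T> counting(IntFunction<T> loader) {
        return i -> { queryCount++; return loader.apply(i); };
    }

    // Access the first n elements one by one, as an extra-lazy List would.
    static int simulateAccesses(int n) {
        queryCount = 0;
        IntFunction<String> loadEmployee = counting(i ->
            "employee-" + i); // stands in for: SELECT ... WHERE department_id=? AND order_id=?
        for (int i = 0; i < n; i++) {
            loadEmployee.apply(i); // one simulated secondary select per index
        }
        return queryCount;
    }

    public static void main(String[] args) {
        // three element accesses -> three queries, mirroring the SQL log above
        System.out.println(simulateAccesses(3) + " queries");
    }
}
```

Three element accesses produce three simulated queries, which is exactly the N+1-prone behavior the caution above describes.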
12. Batching
hibernate.jdbc.batch_size
Controls the maximum number of statements Hibernate will batch together before asking the driver to execute the batch. Zero
or a negative number disables this feature.
hibernate.jdbc.batch_versioned_data
Some JDBC drivers return incorrect row counts when a batch is executed. If your JDBC driver falls into this category this setting
should be set to false . Otherwise, it is safe to enable this which will allow Hibernate to still batch the DML for versioned
entities and still use the returned row counts for optimistic lock checks. Since 5.0, it defaults to true. Previously (versions 3.x
and 4.x), it used to be false.
hibernate.jdbc.batch.builder
Names the implementation class used to manage batching capabilities. It is almost never a good idea to switch from Hibernate’s
default implementation. But if you wish to, this setting would name the
org.hibernate.engine.jdbc.batch.spi.BatchBuilder implementation to use.
hibernate.order_updates
Forces Hibernate to order SQL updates by the entity type and the primary key value of the items being updated. This allows for
more batching to be used. It will also result in fewer transaction deadlocks in highly concurrent systems. Comes with a
performance hit, so benchmark before and after to see if this actually helps or hurts your application.
hibernate.order_inserts
Forces Hibernate to order inserts to allow for more batching to be used. Comes with a performance hit, so benchmark before
and after to see if this actually helps or hurts your application.
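The benefit of hibernate.order_updates can be sketched in plain Java (illustrative; the PendingUpdate type and the batching model are made up for this example): once pending statements are sorted by entity type and primary key, identical statements become consecutive, and each consecutive run can share a single JDBC batch.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class OrderUpdatesDemo {
    record PendingUpdate(String entity, long id) {}

    // Count how many JDBC batches a sequence of updates needs:
    // a new batch starts whenever the statement (entity type) changes.
    static int batchRuns(List<PendingUpdate> updates) {
        int runs = 0;
        String last = null;
        for (PendingUpdate u : updates) {
            if (!u.entity().equals(last)) runs++;
            last = u.entity();
        }
        return runs;
    }

    public static void main(String[] args) {
        List<PendingUpdate> unordered = List.of(
            new PendingUpdate("Person", 1), new PendingUpdate("Phone", 1),
            new PendingUpdate("Person", 2), new PendingUpdate("Phone", 2));
        List<PendingUpdate> ordered = unordered.stream()
            .sorted(Comparator.comparing(PendingUpdate::entity)
                .thenComparingLong(PendingUpdate::id))
            .collect(Collectors.toList());
        // interleaved statements break batching; sorted ones batch in 2 runs
        System.out.println(batchRuns(unordered) + " -> " + batchRuns(ordered));
    }
}
```

The interleaved sequence needs four batch runs, while the sorted one needs only two, which is why ordering can enable more batching (at the cost of the sort itself).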
Since version 5.2, Hibernate allows overriding the global JDBC batch size given by the
hibernate.jdbc.batch_size configuration property for a given Session .
Example 413. Hibernate specific JDBC batch size configuration on a per Session basis
JAVA
entityManager
.unwrap( Session.class )
.setJdbcBatchSize( 10 );
Example 414. Naive way to insert 100 000 entities with Hibernate
JAVA
EntityManager entityManager = null;
EntityTransaction txn = null;
try {
entityManager = entityManagerFactory().createEntityManager();
txn = entityManager.getTransaction();
txn.begin();

for ( int i = 0; i < 100_000; i++ ) {
    Customer customer = new Customer( String.format( "Customer %d", i ) );
    entityManager.persist( customer );
}

txn.commit();
} catch (RuntimeException e) {
if ( txn != null && txn.isActive()) txn.rollback();
throw e;
} finally {
if (entityManager != null) {
entityManager.close();
}
}
1. Hibernate caches all the newly inserted Customer instances in the session-level cache, so, when the transaction ends, 100
000 entities are managed by the persistence context. If the maximum memory allocated to the JVM is rather low, this example
could fail with an OutOfMemoryException . The Java 1.8 JVM allocates either 1/4 of the available RAM or 1 Gb, which can easily
accommodate 100 000 objects on the heap.
2. long-running transactions can deplete a connection pool so other transactions don’t get a chance to proceed.
3. JDBC batching is not enabled by default, so every insert statement requires a database roundtrip. To enable JDBC batching, set
the hibernate.jdbc.batch_size property to an integer between 10 and 50.
Hibernate disables insert batching at the JDBC level transparently if you use an identity identifier
generator.
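The flush-and-clear rhythm that addresses the first problem above can be sketched in plain Java (illustrative; the "persistence context" here is just a list): flushing and clearing every batchSize entities keeps the number of managed instances bounded, no matter how many rows are inserted.

```java
import java.util.ArrayList;
import java.util.List;

public class FlushClearDemo {
    static final List<String> context = new ArrayList<>(); // stand-in for the persistence context
    static int maxManaged;

    static void persist(String entity) {
        context.add(entity);
        maxManaged = Math.max(maxManaged, context.size());
    }

    static void flushAndClear() {
        // flush(): pending statements go out (batched at the JDBC level);
        // clear(): every managed instance is detached, freeing memory
        context.clear();
    }

    // Insert entityCount entities, flushing/clearing every batchSize of them,
    // and report the peak number of simultaneously managed instances.
    static int run(int entityCount, int batchSize) {
        context.clear();
        maxManaged = 0;
        for (int i = 0; i < entityCount; i++) {
            if (i > 0 && i % batchSize == 0) {
                flushAndClear();
            }
            persist("Customer " + i);
        }
        flushAndClear(); // final flush before commit
        return maxManaged;
    }

    public static void main(String[] args) {
        // 100 inserts with a batch size of 10: the context never exceeds 10 entities
        System.out.println("peak managed entities: " + run(100, 10));
    }
}
```

The peak stays at the batch size rather than growing with the total number of inserts, which is the whole point of the periodic flush/clear.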
txn = entityManager.getTransaction();
txn.begin();

int entityCount = 100_000;
int batchSize = 25;

for ( int i = 0; i < entityCount; i++ ) {
    if ( i > 0 && i % batchSize == 0 ) {
        //flush a batch of inserts and release memory
        entityManager.flush();
        entityManager.clear();
    }
    Person person = new Person( String.format( "Person %d", i ) );
    entityManager.persist( person );
}

txn.commit();
} catch (RuntimeException e) {
if ( txn != null && txn.isActive()) txn.rollback();
throw e;
} finally {
if (entityManager != null) {
entityManager.close();
}
}
txn = entityManager.getTransaction();
txn.begin();

scrollableResults = session
    .createQuery( "select p from Person p" )
    .setCacheMode( CacheMode.IGNORE )
    .scroll( ScrollMode.FORWARD_ONLY );

int count = 0;
while ( scrollableResults.next() ) {
    Person person = (Person) scrollableResults.get( 0 );
    processPerson( person );
    if ( ++count % batchSize == 0 ) {
        //flush a batch of updates and release memory:
        entityManager.flush();
        entityManager.clear();
    }
}
txn.commit();
} catch (RuntimeException e) {
if ( txn != null && txn.isActive()) txn.rollback();
throw e;
} finally {
if (scrollableResults != null) {
scrollableResults.close();
}
if (entityManager != null) {
entityManager.close();
}
}
If left unclosed by the application, Hibernate will automatically close the underlying resources (e.g.
ResultSet and PreparedStatement ) used internally by the ScrollableResults when the current
transaction is ended (either commit or rollback).
12.2.3. StatelessSession
StatelessSession is a command-oriented API provided by Hibernate. Use it to stream data to and from the database in the
form of detached objects. A StatelessSession has no persistence context associated with it and does not provide many of the
higher-level lifecycle semantics. In particular, a stateless session does not implement:
a first-level cache
interaction with any second-level or query cache
transactional write-behind or automatic dirty checking
Limitations of StatelessSession :
Operations performed via a stateless session bypass Hibernate's event model and interceptors.
Due to the lack of a first-level cache, stateless sessions are vulnerable to data aliasing effects.
A stateless session is a lower-level abstraction that is much closer to the underlying JDBC.
JAVA
StatelessSession statelessSession = null;
Transaction txn = null;
ScrollableResults scrollableResults = null;
try {
SessionFactory sessionFactory = entityManagerFactory().unwrap( SessionFactory.class );
statelessSession = sessionFactory.openStatelessSession();
txn = statelessSession.getTransaction();
txn.begin();

scrollableResults = statelessSession
    .createQuery( "select p from Person p" )
    .scroll( ScrollMode.FORWARD_ONLY );

while ( scrollableResults.next() ) {
    Person person = (Person) scrollableResults.get( 0 );
    processPerson( person );
    statelessSession.update( person );
}
txn.commit();
} catch (RuntimeException e) {
if ( txn != null && txn.getStatus() == TransactionStatus.ACTIVE ) txn.rollback();
throw e;
} finally {
if (scrollableResults != null) {
scrollableResults.close();
}
if (statelessSession != null) {
statelessSession.close();
}
}
The Person instances returned by the query are immediately detached. They are never associated with any persistence
context.
The insert() , update() , and delete() operations defined by the StatelessSession interface operate directly on database
rows. They cause the corresponding SQL operations to be executed immediately. They have different semantics from the save() ,
saveOrUpdate() , and delete() operations defined by the Session interface.
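As a sketch of these row-level semantics (assuming a hypothetical mapped Person entity with an id/name constructor and a name setter, and an already-built SessionFactory):

```java
// Each operation issues its SQL statement immediately: no persistence
// context, no transactional write-behind, no dirty checking, no cascading.
StatelessSession statelessSession = sessionFactory.openStatelessSession();
Transaction txn = statelessSession.beginTransaction();
try {
    Person person = new Person( 1, "John Doe" );
    statelessSession.insert( person );   // INSERT executed now

    person.setName( "John Doe Jr." );
    statelessSession.update( person );   // UPDATE executed now; no automatic dirty checking

    statelessSession.delete( person );   // DELETE executed now; associations are not cascaded
    txn.commit();
}
catch ( RuntimeException e ) {
    txn.rollback();
    throw e;
}
finally {
    statelessSession.close();
}
```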
Example 418. Pseudo-syntax for UPDATE and DELETE statements using HQL
JAVA
UPDATE [VERSIONED] [FROM] EntityName [[AS] alias] [WHERE where_conditions]
DELETE [FROM] EntityName [[AS] alias] [WHERE where_conditions]
The FROM and WHERE clauses are each optional, but it’s good practice to use them.
The FROM clause can only refer to a single entity, which can be aliased. If the entity name is aliased, any property references must
be qualified using that alias. If the entity name is not aliased, then it is illegal for any property references to be qualified.
Joins, either implicit or explicit, are prohibited in a bulk HQL query. You can use sub-queries in the
WHERE clause, and the sub-queries themselves can contain joins.
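For instance, a bulk delete whose WHERE clause uses a sub-query might look as follows (a hypothetical sketch assuming Person and Phone entities where Phone has a person association):

```java
// The outer bulk statement may not contain joins, but the sub-query may,
// including the implicit join expressed by ph.person.id.
int deletedEntities = entityManager.createQuery(
    "delete Person p " +
    "where p.id not in (select ph.person.id from Phone ph)" )
    .executeUpdate();
```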
JAVA
int updatedEntities = entityManager.createQuery(
"update Person p " +
"set p.name = :newName " +
"where p.name = :oldName" )
.setParameter( "oldName", oldName )
.setParameter( "newName", newName )
.executeUpdate();
In keeping with the EJB3 specification, HQL UPDATE statements, by default, do not affect the version or the timestamp property
values for the affected entities. You can use a versioned update to force Hibernate to reset the version or timestamp property
values, by adding the VERSIONED keyword after the UPDATE keyword.
JAVA
int updatedEntities = session.createQuery(
"update versioned Person " +
"set name = :newName " +
"where name = :oldName" )
.setParameter( "oldName", oldName )
.setParameter( "newName", newName )
.executeUpdate();
If you use the VERSIONED statement, you cannot use custom version types, which use class
org.hibernate.usertype.UserVersionType .
This feature is only available in HQL since it’s not standardized by JPA.
JAVA
int deletedEntities = entityManager.createQuery(
"delete Person p " +
"where p.name = :name" )
.setParameter( "name", name )
.executeUpdate();
JAVA
int deletedEntities = session.createQuery(
"delete Person " +
"where name = :name" )
.setParameter( "name", name )
.executeUpdate();
Method Query.executeUpdate() returns an int value, which indicates the number of entities affected by the operation. This
may or may not correlate to the number of rows affected in the database. A JPQL/HQL bulk operation might result in multiple SQL
statements being executed, such as for joined-subclass. In the example of joined-subclass, a DELETE against one of the subclasses
may actually result in deletes in the tables underlying the join, or further down the inheritance hierarchy.
JAVA
INSERT INTO EntityName
properties_list
SELECT properties_list
FROM ...
Only the INSERT INTO … SELECT … form is supported. You cannot specify explicit values to insert.
The properties_list is analogous to the column specification in the SQL INSERT statement. For entities involved in mapped
inheritance, you can only use properties directly defined on that given class-level in the properties_list . Superclass properties
are not allowed and subclass properties are irrelevant. In other words, INSERT statements are inherently non-polymorphic.
The SELECT statement can be any valid HQL select query, but the return types must match the types expected by the INSERT.
Hibernate verifies the return types during query compilation, instead of expecting the database to check it. Problems might result
from Hibernate types which are equivalent, rather than equal. One such example is a mismatch between a property defined as an
org.hibernate.type.DateType and a property defined as an org.hibernate.type.TimestampType , even though the
database may not make a distinction, or may be capable of handling the conversion.
If the id property is not specified in the properties_list , Hibernate generates a value automatically. Automatic generation is only
available if you use ID generators which operate on the database. Otherwise, Hibernate throws an exception during parsing.
Available in-database generators are org.hibernate.id.SequenceGenerator and its subclasses, and objects which implement
org.hibernate.id.PostInsertIdentifierGenerator .
For properties mapped as either version or timestamp, the insert statement gives you two options. You can either specify the
property in the properties_list, in which case its value is taken from the corresponding select expressions, or omit it from the
properties_list, in which case the seed value defined by the org.hibernate.type.VersionType is used.
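For example, assuming both Person and Partner map a version property (a hypothetical sketch, not from the reference code), an insert that carries the version over from the select expression might look like this:

```java
// Because version appears in the properties_list, its value comes from the
// corresponding select expression instead of the VersionType seed value.
int insertedEntities = session.createQuery(
    "insert into Partner (id, name, version) " +
    "select p.id, p.name, p.version " +
    "from Person p" )
    .executeUpdate();
```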
JAVA
int insertedEntities = session.createQuery(
"insert into Partner (id, name) " +
"select p.id, p.name " +
"from Person p ")
.executeUpdate();
This section is only a brief overview of HQL. For more information, see HQL.
Class diagram
The Person entity is the base class of this entity inheritance model, and is mapped as follows:
JAVA
@Entity(name = "Person")
@Inheritance(strategy = InheritanceType .JOINED)
public static class Person implements Serializable {
    @Id
    private Integer id;
    @Id
    private String companyName;
}
Both the Doctor and Engineer entity classes extend the Person base class:
JAVA
@Entity(name = "Doctor")
public static class Doctor extends Person {
}
@Entity(name = "Engineer")
public static class Engineer extends Person {
}
JAVA
int updateCount = session.createQuery(
"delete from Person where employed = :employed" )
.setParameter( "employed", false )
.executeUpdate();
insert
into
HT_Person
select
p.id as id,
p.companyName as companyName
from
Person p
where
p.employed = ?
delete
from
Engineer
where
(
id, companyName
) IN (
select
id,
companyName
from
HT_Person
)
delete
from
Doctor
where
(
id, companyName
) IN (
select
id,
companyName
from
HT_Person
)
delete
from
Person
where
(
id, companyName
) IN (
select
id,
companyName
from
HT_Person
)
HT_Person is a temporary table that Hibernate creates to hold all the entity identifiers that are to be updated or deleted by the
bulk id operation. The temporary table can be either global or local, depending on the underlying database capabilities.
As the HHH-11262 (https://hibernate.atlassian.net/browse/HHH-11262) issue describes, there are use cases when the application
developer cannot use temporary tables because the database user lacks this privilege.
For such cases, Hibernate defines several alternative strategies, from which you can choose depending on your database's capabilities:
InlineIdsInClauseBulkIdStrategy
InlineIdsSubSelectValueListBulkIdStrategy
InlineIdsOrClauseBulkIdStrategy
CteValuesListBulkIdStrategy
InlineIdsInClauseBulkIdStrategy
To use this strategy, you need to configure the following configuration property:
XML
<property name="hibernate.hql.bulk_id_strategy"
value="org.hibernate.hql.spi.id.inline.InlineIdsInClauseBulkIdStrategy"
/>
Now, when running the previous test case, Hibernate generates the following SQL statements:
select
p.id as id,
p.companyName as companyName
from
Person p
where
p.employed = ?
delete
from
Engineer
where
( id, companyName )
in (
( 1,'Red Hat USA' ),
( 3,'Red Hat USA' ),
( 1,'Red Hat Europe' ),
( 3,'Red Hat Europe' )
)
delete
from
Doctor
where
( id, companyName )
in (
( 1,'Red Hat USA' ),
( 3,'Red Hat USA' ),
( 1,'Red Hat Europe' ),
( 3,'Red Hat Europe' )
)
delete
from
Person
where
( id, companyName )
in (
( 1,'Red Hat USA' ),
( 3,'Red Hat USA' ),
( 1,'Red Hat Europe' ),
( 3,'Red Hat Europe' )
)
So, the entity identifiers are selected first and used for each particular update or delete statement.
The IN clause row value expression has long been supported by Oracle, PostgreSQL, and nowadays
by MySQL 5.7. However, SQL Server 2014 does not support this syntax, so you’ll have to use a
different strategy.
InlineIdsSubSelectValueListBulkIdStrategy
To use this strategy, you need to configure the following configuration property:
<property name="hibernate.hql.bulk_id_strategy"
value="org.hibernate.hql.spi.id.inline.InlineIdsSubSelectValueListBulkIdStrategy"
/>
Now, when running the previous test case, Hibernate generates the following SQL statements:
select
p.id as id,
p.companyName as companyName
from
Person p
where
p.employed = ?
delete
from
Engineer
where
( id, companyName ) in (
select
id,
companyName
from (
values
( 1,'Red Hat USA' ),
( 3,'Red Hat USA' ),
( 1,'Red Hat Europe' ),
( 3,'Red Hat Europe' )
) as HT
(id, companyName)
)
delete
from
Doctor
where
( id, companyName ) in (
select
id,
companyName
from (
values
( 1,'Red Hat USA' ),
( 3,'Red Hat USA' ),
( 1,'Red Hat Europe' ),
( 3,'Red Hat Europe' )
) as HT
(id, companyName)
)
delete
from
Person
where
( id, companyName ) in (
select
id,
companyName
from (
values
( 1,'Red Hat USA' ),
( 3,'Red Hat USA' ),
( 1,'Red Hat Europe' ),
( 3,'Red Hat Europe' )
) as HT
(id, companyName)
)
The underlying database must support the VALUES list clause, like PostgreSQL or SQL Server 2008. However,
this strategy requires the IN-clause row value expression for composite identifiers, so you can use this strategy
only with PostgreSQL.
InlineIdsOrClauseBulkIdStrategy
To use this strategy, you need to configure the following configuration property:
XML
<property name="hibernate.hql.bulk_id_strategy"
value="org.hibernate.hql.spi.id.inline.InlineIdsOrClauseBulkIdStrategy"
/>
Now, when running the previous test case, Hibernate generates the following SQL statements:
SQL
select
p.id as id,
p.companyName as companyName
from
Person p
where
p.employed = ?
delete
from
Engineer
where
( id = 1 and companyName = 'Red Hat USA' )
or ( id = 3 and companyName = 'Red Hat USA' )
or ( id = 1 and companyName = 'Red Hat Europe' )
or ( id = 3 and companyName = 'Red Hat Europe' )
delete
from
Doctor
where
( id = 1 and companyName = 'Red Hat USA' )
or ( id = 3 and companyName = 'Red Hat USA' )
or ( id = 1 and companyName = 'Red Hat Europe' )
or ( id = 3 and companyName = 'Red Hat Europe' )
delete
from
Person
where
( id = 1 and companyName = 'Red Hat USA' )
or ( id = 3 and companyName = 'Red Hat USA' )
or ( id = 1 and companyName = 'Red Hat Europe' )
or ( id = 3 and companyName = 'Red Hat Europe' )
This strategy has the advantage of being supported by all the major relational database systems (e.g.
Oracle, SQL Server, MySQL, and PostgreSQL).
CteValuesListBulkIdStrategy
To use this strategy, you need to configure the following configuration property:
XML
<property name="hibernate.hql.bulk_id_strategy"
value="org.hibernate.hql.spi.id.inline.CteValuesListBulkIdStrategy"
/>
Now, when running the previous test case, Hibernate generates the following SQL statements:
SQL
select
p.id as id,
p.companyName as companyName
from
Person p
where
p.employed = ?
with HT_Person (id, companyName) as (
select id, companyName
from (
values
(?, ?),
(?, ?),
(?, ?),
(?, ?)
) as HT (id, companyName)
)
delete
from
Engineer
where
( id, companyName ) in (
select
id,
companyName
from
HT_Person
)
with HT_Person (id, companyName) as (
select id, companyName
from (
values
(?, ?),
(?, ?),
(?, ?),
(?, ?)
) as HT (id, companyName)
)
delete
from
Doctor
where
( id, companyName ) in (
select
id,
companyName
from
HT_Person
)
with HT_Person (id, companyName) as (
select id, companyName
from (
values
(?, ?),
(?, ?),
(?, ?),
(?, ?)
) as HT (id, companyName)
)
delete
from
Person
where
( id, companyName ) in (
select
id,
companyName
from
HT_Person
)
The underlying database must support the CTE (Common Table Expressions) that can be referenced
from non-query statements as well, like PostgreSQL since 9.1 or SQL Server since 2005. The
underlying database must also support the VALUES list clause, like PostgreSQL or SQL Server 2008.
However, this strategy requires the IN-clause row value expression for composite identifiers, so you can
use this strategy only with PostgreSQL.
If you can use temporary tables, that’s probably the best choice. However, if you are not allowed to create temporary tables, you
must pick one of these four strategies that works with your underlying database. Before making up your mind, you should
benchmark which one works best for your current workload. For instance, CTEs are optimization fences in PostgreSQL
(http://blog.2ndquadrant.com/postgresql-ctes-are-optimization-fences/), so make sure you measure before taking a decision.
If you’re using Oracle or MySQL 5.7, you can choose either InlineIdsOrClauseBulkIdStrategy or
InlineIdsInClauseBulkIdStrategy . For older versions of MySQL, you can only use InlineIdsOrClauseBulkIdStrategy .
If you’re using SQL Server, InlineIdsOrClauseBulkIdStrategy is the only option for you.
If you’re using PostgreSQL, then you have the luxury of choosing any of these four strategies.
13. Caching
At runtime, Hibernate handles moving data into and out of the second-level cache in response to the operations performed by the
Session , which acts as a transaction-level cache of persistent data. Once an entity becomes managed, that object is added to the
internal cache of the current persistence context ( EntityManager or Session ). The persistence context is also called the first-
level cache, and it’s enabled by default.
It is possible to configure a JVM-level ( SessionFactory -level) or even a cluster cache on a class-by-class and collection-by-
collection basis.
Be aware that caches are not aware of changes made to the persistent store by other applications.
They can, however, be configured to regularly expire cached data.
13.1.1. RegionFactory
org.hibernate.cache.spi.RegionFactory defines the integration between Hibernate and a pluggable caching provider.
hibernate.cache.region.factory_class is used to declare the provider to use. Hibernate comes with built-in support for the
Java caching standard JCache and also two popular caching libraries: Ehcache and Infinispan. Detailed information is provided
later in this chapter.
hibernate.cache.use_second_level_cache
Enable or disable second level caching overall. The default is true, although the default region factory is
NoCachingRegionFactory .
hibernate.cache.use_query_cache
Enable or disable second level caching of query results. The default is false.
hibernate.cache.query_cache_factory
Names an implementation of org.hibernate.cache.spi.QueryCacheFactory . Query result caching is handled by a special
contract that deals with staleness-based invalidation of the results. The default implementation does not allow stale results
at all. Use this setting for applications that would like to relax that.
hibernate.cache.use_minimal_puts
Optimizes second-level cache operations to minimize writes, at the cost of more frequent reads. Providers typically set this
appropriately.
hibernate.cache.region_prefix
Defines a name to be used as a prefix to all second-level cache region names.
hibernate.cache.default_cache_concurrency_strategy
In Hibernate second-level caching, all regions can be configured differently including the concurrency strategy to use when
accessing that particular region. This setting allows defining a default strategy to be used. This setting is very rarely required as
the pluggable providers do specify the default strategy to use. Valid values include:
read-only,
read-write,
nonstrict-read-write,
transactional
hibernate.cache.use_structured_entries
If true , forces Hibernate to store data in the second-level cache in a more human-friendly format. Can be useful if you’d like to
be able to "browse" the data directly in your cache, but does have a performance impact.
hibernate.cache.auto_evict_collection_cache
Enables or disables the automatic eviction of a bidirectional association’s collection cache entry when the association is
changed just from the owning side. This is disabled by default, as it has a performance impact to track this state. However, if
your application does not manage both sides of bidirectional association where the collection side is cached, the alternative is
to have stale data in that collection cache.
hibernate.cache.use_reference_entries
Enable direct storage of entity references into the second level cache for read-only or immutable entities.
hibernate.cache.keys_factory
When storing entries into the second-level cache as a key-value pair, the identifiers can be wrapped into tuples <entity type,
tenant, identifier> to guarantee uniqueness in case that second-level cache stores all entities in single space. These tuples are
then used as keys in the cache. When the second-level cache implementation (incl. its configuration) guarantees that different
entity types are stored separately and multi-tenancy is not used, you can omit this wrapping to achieve better performance.
Currently, this property is only supported when Infinispan is configured as the second-level cache implementation. Valid values
are:
default (wraps identifiers in the tuple)
simple (uses identifiers as keys without any wrapping)
fully qualified class name that implements org.hibernate.cache.spi.CacheKeysFactory
By default, entities are not part of the second level cache and we recommend you to stick to this setting. However, you can
override this by setting the shared-cache-mode element in your persistence.xml file or by using the
javax.persistence.sharedCache.mode property in your configuration file. The following values are possible:
ENABLE_SELECTIVE (default and recommended value)
Entities are not cached unless explicitly marked as cacheable (with the @Cacheable
(https://javaee.github.io/javaee-spec/javadocs/javax/persistence/Cacheable.html) annotation).
DISABLE_SELECTIVE
Entities are cached unless explicitly marked as non-cacheable.
ALL
All entities are always cached even if marked as non-cacheable.
NONE
No entity is cached even if marked as cacheable. This option can make sense to disable second-level cache altogether.
The cache concurrency strategy used by default can be set globally via the
hibernate.cache.default_cache_concurrency_strategy configuration property. The values for this property are:
read-only
If your application needs to read, but not modify, instances of a persistent class, a read-only cache is the best choice. The
application can still delete entities, and these changes should be reflected in the second-level cache so that the cache does not
provide stale entities. Implementations may use performance optimizations based on the immutability of entities.
read-write
If the application needs to update data, a read-write cache might be appropriate. This strategy provides consistent access to a
single entity, but not a serializable transaction isolation level; e.g. when TX1 looks up an entity and does not find it, TX2
inserts the entity into the cache, and TX1 looks it up again, the new entity can be read in TX1.
nonstrict-read-write
Similar to read-write strategy but there might be occasional stale reads upon concurrent access to an entity. The choice of this
strategy might be appropriate if the application rarely updates the same data simultaneously and strict transaction isolation is
not required. Implementations may use performance optimizations that make use of the relaxed consistency guarantee.
transactional
Provides serializable transaction isolation level.
Rather than using a global cache concurrency strategy, it is recommended to define this setting on a
per entity basis. Use the @org.hibernate.annotations.Cache
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Cache.html) annotation for
that.
usage
Defines the CacheConcurrencyStrategy
region
Defines a cache region where entries will be stored
include
Whether lazy properties should be included in the second level cache. The default value is all , so lazy properties are cacheable.
The other possible value is non-lazy , so lazy properties are not cacheable.
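Putting the three attributes together, a hypothetical mapping might look as follows (the Account entity and the region name are illustrative, not from the reference code):

```java
@Entity(name = "Account")
@Cacheable
@org.hibernate.annotations.Cache(
    usage = CacheConcurrencyStrategy.READ_WRITE, // concurrency strategy for this region
    region = "account.cache.region",             // hypothetical region name
    include = "non-lazy"                         // keep lazy properties out of the cache entry
)
public static class Account {
    @Id
    @GeneratedValue
    private Long id;
}
```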
Although we still believe that all entities belonging to a given entity hierarchy should share the same caching semantics, the JPA
specification says that the @Cacheable annotation could be overwritten by a subclass:
“ The value of the Cacheable annotation is inherited by subclasses; it can be overridden by specifying
Cacheable on a subclass.
— Section 11.1.7 of the JPA 2.1 Specification
As of Hibernate ORM 5.3, it’s now possible to override a base class @Cacheable or @Cache
definition in subclasses.
However, the Hibernate cache concurrency strategy (e.g. read-only, nonstrict-read-write, read-write,
transactional) is still defined at the root entity level and cannot be overridden.
Nevertheless, the reasons why we advise you to have all entities belonging to an inheritance tree share the same caching
definition can be summed as follows:
from a performance perspective, adding an additional check on a per entity type level slows the bootstrap process.
providing different caching semantics for subclasses would violate the Liskov substitution principle
(https://en.wikipedia.org/wiki/Liskov_substitution_principle).
JAVA
@Entity(name = "Phone")
@Cacheable
@org.hibernate.annotations.Cache (usage = CacheConcurrencyStrategy .NONSTRICT_READ_WRITE)
public static class Phone {
@Id
@GeneratedValue
private Long id;
@ManyToOne
private Person person;
@Version
private int version;
Hibernate stores cached entities in a dehydrated form, which is similar to the database representation. Aside from the foreign key
column values of the @ManyToOne or @OneToOne child-side associations, entity relationships are not stored in the cache.
Once an entity is stored in the second-level cache, you can avoid a database hit and load the entity from the cache alone:
JAVA
Person person = entityManager.find( Person .class , 1L );
JAVA
Person person = session.get( Person .class , 1L );
The Hibernate second-level cache can also load entities by their natural id:
JAVA
@Entity(name = "Person")
@Cacheable
@org.hibernate.annotations.Cache (usage = CacheConcurrencyStrategy .READ_WRITE)
public static class Person {
@Id
@GeneratedValue(strategy = GenerationType .AUTO)
private Long id;
@NaturalId
@Column(name = "code", unique = true)
private String code;
JAVA
Person person = session
    .byNaturalId( Person.class )
    .using( "code", "unique-code" )
    .load();
If the collection is made of value types (basic or embeddables mapped with @ElementCollection ), the collection is stored as
such. If the collection contains other entities ( @OneToMany or @ManyToMany ), the collection cache entry will store the entity
identifiers only.
JAVA
@OneToMany(mappedBy = "person", cascade = CascadeType .ALL)
@org.hibernate.annotations.Cache (usage = CacheConcurrencyStrategy .NONSTRICT_READ_WRITE)
private List<Phone > phones = new ArrayList <>( );
Collections are read-through, meaning they are cached upon being accessed for the first time:
JAVA
Person person = entityManager.find( Person .class , 1L );
person.getPhones().size();
Subsequent collection retrievals will use the cache instead of going to the database.
The collection cache is not write-through so any modification will trigger a collection cache entry
invalidation. On a subsequent access, the collection will be loaded from the database and re-cached.
Caching of query results introduces some overhead in terms of your application's normal transactional
processing. For example, if you cache results of a query against Person , Hibernate will need to keep
track of when those results should be invalidated because changes have been committed against any
Person entity.
That, coupled with the fact that most applications simply gain no benefit from caching query results, leads
Hibernate to disable caching of query results by default.
To use query caching, you will first need to enable it with the following configuration property:
XML
<property
name="hibernate.cache.use_query_cache"
value="true" />
As mentioned above, most queries do not benefit from caching of their results. So by default, individual queries are not cached
even after enabling query caching. Each particular query that needs to be cached must be manually set as cacheable. This way,
the query looks for existing cache results or adds the query results to the cache when being executed.
JAVA
List<Person > persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name = :name", Person .class )
.setParameter( "name", "John Doe")
.setHint( "org.hibernate.cacheable", "true")
.getResultList();
JAVA
List<Person > persons = session.createQuery(
"select p " +
"from Person p " +
"where p.name = :name")
.setParameter( "name", "John Doe")
.setCacheable(true)
.list();
The query cache does not cache the state of the actual entities in the cache; it caches only identifier
values and results of value type.
Just as with collection caching, the query cache should always be used in conjunction with the second-
level cache for those entities expected to be cached as part of a query result cache.
default-query-results-region
Holding the cached query results.
default-update-timestamps-region
Holding timestamps of the most recent updates to queryable tables. These are used to validate the results as they are served
from the query cache.
If you configure your underlying cache implementation to use expiration, it’s very important that the
timeout of the underlying cache region for the default-update-timestamps-region is set to a higher
value than the timeouts of any of the query caches.
In fact, we recommend that the default-update-timestamps-region region is not configured for expiration
(time-based) or eviction (size/memory-based) at all. Note that an LRU (Least Recently Used) cache eviction
policy is never appropriate for this particular cache region.
If you require fine-grained control over query cache expiration policies, you can specify a named cache region for a particular
query.
Example 444. Caching query in custom region using Hibernate native API
JAVA
List<Person > persons = session.createQuery(
"select p " +
"from Person p " +
"where p.id > :id")
.setParameter( "id", 0L)
.setCacheable(true)
.setCacheRegion( "query.cache.person" )
.list();
If you want to force the query cache to refresh one of its regions (disregarding any cached results it finds there), you can use
custom cache modes.
JAVA
List<Person > persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.id > :id", Person .class )
.setParameter( "id", 0L)
.setHint( QueryHints .HINT_CACHEABLE, "true")
.setHint( QueryHints .HINT_CACHE_REGION, "query.cache.person" )
.setHint( "javax.persistence.cache.storeMode", CacheStoreMode .REFRESH )
.getResultList();
Example 446. Using custom query cache mode with Hibernate native API
JAVA
List<Person > persons = session.createQuery(
"select p " +
"from Person p " +
"where p.id > :id")
.setParameter( "id", 0L)
.setCacheable(true)
.setCacheRegion( "query.cache.person" )
.setCacheMode( CacheMode .REFRESH )
.list();
This is particularly useful in cases where underlying data may have been updated via a separate process and is
a far more efficient alternative to the bulk eviction of the region via SessionFactory eviction which looks as
follows:
JAVA
session.getSessionFactory().getCache().evictQueryRegion( "query.cache.person" );
The relationship between Hibernate and JPA cache modes can be seen in the following table:
Hibernate | JPA | Description
CacheMode.NORMAL | CacheStoreMode.USE and CacheRetrieveMode.USE | Default. Reads/writes data from/into the cache
CacheMode.REFRESH | CacheStoreMode.REFRESH and CacheRetrieveMode.BYPASS | Doesn't read from the cache, but writes to the cache upon loading from the database
CacheMode.PUT | CacheStoreMode.USE and CacheRetrieveMode.BYPASS | Doesn't read from the cache, but writes to the cache as it reads from the database
CacheMode.GET | CacheStoreMode.BYPASS and CacheRetrieveMode.USE | Reads from the cache, but doesn't write to the cache
CacheMode.IGNORE | CacheStoreMode.BYPASS and CacheRetrieveMode.BYPASS | Doesn't read/write data from/into the cache
Setting the cache mode can be done either when loading entities directly or when executing a query.
JAVA
Map<String, Object> hints = new HashMap<>();
hints.put( "javax.persistence.cache.retrieveMode", CacheRetrieveMode.USE );
hints.put( "javax.persistence.cache.storeMode", CacheStoreMode.REFRESH );
Person person = entityManager.find( Person.class, 1L, hints );
Example 448. Using custom cache modes with Hibernate native API
JAVA
session.setCacheMode( CacheMode .REFRESH );
Person person = session.get( Person .class , 1L );
Example 449. Using custom cache modes for queries with JPA
JAVA
List<Person> persons = entityManager.createQuery(
    "select p from Person p", Person.class )
    .setHint( QueryHints.HINT_CACHEABLE, "true" )
    .setHint( "javax.persistence.cache.retrieveMode", CacheRetrieveMode.USE )
    .setHint( "javax.persistence.cache.storeMode", CacheStoreMode.REFRESH )
    .getResultList();
Example 450. Using custom cache modes for queries with Hibernate native API
JAVA
List<Person > persons = session.createQuery(
"select p from Person p" )
.setCacheable( true )
.setCacheMode( CacheMode .REFRESH )
.list();
JAVA
entityManager.getEntityManagerFactory().getCache().evict( Person .class );
Hibernate is much more flexible in this regard as it offers fine-grained control over what needs to be evicted. The
org.hibernate.Cache (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/Cache.html) interface defines various evicting
strategies:
entities (by their class or region)
collections (by the region, and it might take the collection owner identifier as well)
queries (by region)
JAVA
session.getSessionFactory().getCache().evictQueryRegion( "query.cache.person" );
This way, you can get access to the Statistics (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/stat/Statistics.html) class
which comprises all sorts of second-level cache metrics.
JAVA
Statistics statistics = session.getSessionFactory().getStatistics();
CacheRegionStatistics secondLevelCacheStatistics =
statistics.getDomainDataRegionStatistics( "query.cache.person" );
long hitCount = secondLevelCacheStatistics.getHitCount();
long missCount = secondLevelCacheStatistics.getMissCount();
double hitRatio = (double ) hitCount / ( hitCount + missCount );
13.9. JCache
Use of the built-in integration for JCache (https://jcp.org/en/jsr/detail?id=107) requires that the
hibernate-jcache module jar (and all of its dependencies) are on the classpath. In addition, a JCache
implementation needs to be added as well. A list of compatible implementations can be found on the
JCP website (https://jcp.org/aboutJava/communityprocess/implementations/jsr107/index.html). An alternative source of
compatible implementations can be found through the JSR-107 test zoo (https://github.com/cruftex/jsr107-test-zoo).
13.9.1. RegionFactory
The hibernate-jcache module defines the following region factory: JCacheRegionFactory .
To use the JCacheRegionFactory , you need to specify the following configuration property:
XML
<property
name="hibernate.cache.region.factory_class"
value="jcache"/>
If you do not specify additional properties, the JCacheRegionFactory will load the default JCache provider and create the
default CacheManager . Also, caches will be created using the default javax.cache.configuration.MutableConfiguration .
In order to control which provider to use and to specify the configuration for the CacheManager and caches, you can use the
following two properties:
<property
name="hibernate.javax.cache.provider"
value="org.ehcache.jsr107.EhcacheCachingProvider"/>
<property
name="hibernate.javax.cache.uri"
value="file:/path/to/ehcache.xml"/>
Only by specifying the second property hibernate.javax.cache.uri will you be able to have a CacheManager per
SessionFactory .
You may change this behavior by setting the hibernate.javax.cache.missing_cache_strategy property to one of the
following values:
Value | Description
fail | Fail with an exception on missing caches.
create-warn | Default value. Create a new cache when a cache is not found (see create below), and also log a warning about the missing cache.
create | Create a new cache when a cache is not found, without logging any warning about the missing cache.
Note that caches created this way may be very badly configured (unlimited size and no eviction in
particular) unless the cache provider was explicitly configured to use a specific configuration for default
caches.
Ehcache, in particular, allows you to set such a default configuration using cache templates, see
http://www.ehcache.org/documentation/3.0/107.html#supplement-jsr-107-configurations
13.10. Ehcache
This integration covers Ehcache 2.x; in order to use Ehcache 3.x as a second-level cache, refer to the JCache integration.
Use of the built-in integration for Ehcache (http://www.ehcache.org/) requires that the hibernate-
ehcache module jar (and all of its dependencies) are on the classpath.
13.10.1. RegionFactory
The hibernate-ehcache module defines two specific region factories: EhCacheRegionFactory and
SingletonEhCacheRegionFactory .
EhCacheRegionFactory
To use the EhCacheRegionFactory , you need to specify the following configuration property:
XML
<property
name="hibernate.cache.region.factory_class"
value="ehcache"/>
SingletonEhCacheRegionFactory
To use the SingletonEhCacheRegionFactory , you need to specify the following configuration property:
XML
<property
name="hibernate.cache.region.factory_class"
value="ehcache-singleton"/>
By default, the Ehcache region factory will log a warning when asked to create a cache that is not explicitly configured and pre-
started in the underlying cache manager. Thus if you configure an entity type or a collection as cached, but do not configure the
corresponding cache explicitly, one warning will be logged for each cache that was not configured explicitly.
You may change this behavior by setting the hibernate.cache.ehcache.missing_cache_strategy property to one of the
following values:
Value | Description
fail | Fail with an exception on missing caches.
create-warn | Default value. Create a new cache when a cache is not found (see create below), and also log a warning about the missing cache.
create | Create a new cache when a cache is not found, without logging any warning about the missing cache.
Note that caches created this way may be very badly configured (large size in particular) unless an
appropriate <defaultCache> entry is added to the Ehcache configuration.
13.11. Infinispan
Infinispan is a distributed in-memory key/value data store, available as a cache or data grid, which can be used as a Hibernate
2nd-level cache provider as well.
It supports advanced functionality such as transactions, events, querying, distributed processing, off-heap and geographical
failover.
14. Interceptors and events
It is useful for the application to react to certain events that occur inside Hibernate. This allows for the implementation of generic
functionality and the extension of Hibernate functionality.
14.1. Interceptors
The org.hibernate.Interceptor interface provides callbacks from the session to the application, allowing the application to
inspect and/or manipulate properties of a persistent object before it is saved, updated, deleted or loaded.
One possible use for this is to track auditing information. The following example shows an Interceptor implementation that
automatically logs when an entity is updated.
JAVA
public static class LoggingInterceptor extends EmptyInterceptor {
    @Override
    public boolean onFlushDirty(
            Object entity,
            Serializable id,
            Object[] currentState,
            Object[] previousState,
            String[] propertyNames,
            Type[] types) {
        LOGGER.debugv( "Entity {0}#{1} changed from {2} to {3}",
            entity.getClass().getSimpleName(),
            id,
            Arrays.toString( previousState ),
            Arrays.toString( currentState )
        );
        return super.onFlushDirty( entity, id, currentState,
            previousState, propertyNames, types
        );
    }
}
JAVA
SessionFactory sessionFactory = entityManagerFactory.unwrap( SessionFactory.class );
Session session = sessionFactory
    .withOptions()
    .interceptor( new LoggingInterceptor() )
    .openSession();
session.getTransaction().begin();
session.getTransaction().commit();
A SessionFactory -scoped interceptor is registered with the Configuration object prior to building the SessionFactory .
Unless a session is opened explicitly specifying the interceptor to use, the SessionFactory -scoped interceptor will be applied to
all sessions opened from that SessionFactory . SessionFactory -scoped interceptors must be thread-safe. Ensure that you do
not store session-specific states since multiple sessions will use this interceptor potentially concurrently.
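A SessionFactory -scoped interceptor can also be configured declaratively. As a sketch, assuming the hibernate.session_factory.interceptor setting and an interceptor class with a no-arg constructor (the package name here is illustrative):

```xml
<property
    name="hibernate.session_factory.interceptor"
    value="org.hibernate.userguide.events.LoggingInterceptor"/>
```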
Many methods of the Session interface correlate to an event type. The full range of defined event types is declared as enum
values on org.hibernate.event.spi.EventType . When a request is made of one of these methods, the Session generates an
appropriate event and passes it to the configured event listener(s) for that type.
Applications can customize the listener interfaces (i.e., the LoadEvent is processed by the registered implementation of the
LoadEventListener interface), in which case their implementations would be responsible for processing the load() requests
made of the Session .
The listeners should be considered stateless; they are shared between requests, and should not
save any state as instance variables.
A custom listener implements the appropriate interface for the event it wants to process and/or extends one of the convenience
base classes (or even the default event listeners used by Hibernate out-of-the-box, as these are declared non-final for this purpose).
JAVA
EntityManagerFactory entityManagerFactory = entityManagerFactory();
SessionFactoryImplementor sessionFactory = entityManagerFactory.unwrap( SessionFactoryImplementor.class );
sessionFactory
    .getServiceRegistry()
    .getService( EventListenerRegistry.class )
    .prependListeners( EventType.LOAD, new SecuredLoadEntityListener() );
1. you provide a custom Interceptor , which is taken into consideration by the default Hibernate event listeners. For example,
the Interceptor#onSave() method is invoked by the Hibernate AbstractSaveEventListener , while
Interceptor#onLoad() is called by the DefaultPreLoadEventListener .
2. you can replace any given default event listener with your own implementation. When doing this, you should probably
extend the default listeners because otherwise, you’d have to take care of all the low-level entity state transition logic. For
example, if you replace the DefaultPreLoadEventListener with your own implementation, then, only if you call the
Interceptor#onLoad() method explicitly, you can mix the custom load event listener with a custom Hibernate interceptor.
First, you must configure the appropriate event listeners, to enable the use of JACC authorization. Again, see Event Listener
Registration for the details.
@Override
public Action getAction() {
    return Action.KEEP_ORIGINAL;
}
};
@Override
public void prepareServices(
        StandardServiceRegistryBuilder serviceRegistryBuilder) {
    boolean isSecurityEnabled = serviceRegistryBuilder
        .getSettings().containsKey( AvailableSettings.JACC_ENABLED );
    final JaccService jaccService = isSecurityEnabled ?
        new StandardJaccServiceImpl() : new DisabledJaccServiceImpl();
    serviceRegistryBuilder.addService( JaccService.class, jaccService );
}
@Override
public void integrate(
        Metadata metadata,
        SessionFactoryImplementor sessionFactory,
        SessionFactoryServiceRegistry serviceRegistry) {
    doIntegration(
        serviceRegistry
            .getService( ConfigurationService.class ).getSettings(),
        // pass no permissions here, because atm actually injecting the
        // permissions into the JaccService is handled on SessionFactoryImpl via
        // the org.hibernate.boot.cfgxml.spi.CfgXmlAccessService
        null,
        serviceRegistry
    );
}
if ( permissionDeclarations != null ) {
    for ( GrantedPermission declaration : permissionDeclarations
            .getPermissionDeclarations() ) {
        jaccService.addPermission( declaration );
    }
}
eventListenerRegistry.prependListeners(
    EventType.PRE_DELETE, new JaccPreDeleteEventListener() );
eventListenerRegistry.prependListeners(
    EventType.PRE_INSERT, new JaccPreInsertEventListener() );
eventListenerRegistry.prependListeners(
    EventType.PRE_UPDATE, new JaccPreUpdateEventListener() );
eventListenerRegistry.prependListeners(
    EventType.PRE_LOAD, new JaccPreLoadEventListener() );
}
@Override
public void disintegrate(SessionFactoryImplementor sessionFactory,
        SessionFactoryServiceRegistry serviceRegistry) {
    // nothing to do
}
}
You must also decide how to configure your JACC provider. Consult your JACC provider documentation.
Type Description
@PostLoad Executed after an entity has been loaded into the current
persistence context or an entity has been refreshed.
There are two available approaches defined for specifying callback handling:
The first approach is to annotate methods on the entity itself to receive notifications of a particular entity lifecycle event(s).
The second is to use a separate entity listener class. An entity listener is a stateless class with a no-arg constructor. The callback
annotations are placed on a method of this class instead of the entity class. The entity listener class is then associated with the
entity using the javax.persistence.EntityListeners annotation.
JAVA
@Entity
@EntityListeners( LastUpdateListener.class )
public static class Person {

    @Id
    private Long id;

    @Transient
    private long age;

    /**
     * Set the transient property at load time based on a calculation.
     * Note that a native Hibernate formula mapping is better for this purpose.
     */
    @PostLoad
    public void calculateAge() {
        age = ChronoUnit.YEARS.between( LocalDateTime.ofInstant(
            Instant.ofEpochMilli( dateOfBirth.getTime() ), ZoneOffset.UTC ),
            LocalDateTime.now()
        );
    }
}
public static class LastUpdateListener {

    @PreUpdate
    @PrePersist
    public void setLastUpdate( Person p ) {
        p.setLastUpdate( new Date() );
    }
}
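The age computation in calculateAge() relies only on the java.time API. As a standalone sketch of the same arithmetic (the class and method names here are illustrative, not part of Hibernate):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

public class AgeCalculator {

    // Whole years between a birth timestamp (epoch millis, as stored in a
    // java.util.Date) and a reference date-time, both interpreted in UTC -
    // mirroring the @PostLoad callback above.
    public static long yearsBetween(long birthEpochMillis, LocalDateTime reference) {
        return ChronoUnit.YEARS.between(
            LocalDateTime.ofInstant( Instant.ofEpochMilli( birthEpochMillis ), ZoneOffset.UTC ),
            reference
        );
    }
}
```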
These approaches can be mixed, meaning you can use both together.
Regardless of whether the callback method is defined on the entity or on an entity listener, it must have a void-return signature.
The name of the method is irrelevant as it is the placement of the callback annotations that makes the method a callback. In the
case of callback methods defined on the entity class, the method must additionally have a no-argument signature. For callback
methods defined on an entity listener class, the method must have a single argument signature; the type of that argument can be
either java.lang.Object (to facilitate attachment to multiple entities) or the specific entity type.
A callback method can throw a RuntimeException . If the callback method does throw a RuntimeException , then the current
transaction, if any, must be rolled back.
It is possible that multiple callback methods are defined for a particular lifecycle event. When that is the case, the defined order of
execution is well defined by the JPA spec (specifically section 3.5.4):
Any default listeners associated with the entity are invoked first, in the order they were specified in the XML. See the
javax.persistence.ExcludeDefaultListeners annotation.
Next, entity listener class callbacks associated with the entity hierarchy are invoked, in the order they are defined in the
EntityListeners . If multiple classes in the entity hierarchy define entity listeners, the listeners defined for a superclass are
invoked before the listeners defined for its subclasses. See the javax.persistence.ExcludeSuperclassListeners annotation.
Lastly, callback methods defined on the entity hierarchy are invoked. If a callback type is annotated on both an entity and one
or more of its superclasses without method overriding, both would be called, the most general superclass first. An entity class
is also allowed to override a callback method defined in a superclass in which case the super callback would not get invoked;
the overriding method would get invoked provided it is annotated.
JAVA
public class DefaultEntityListener {
    // defines the onPersist and onUpdate callback methods (bodies elided here)
}
XML
<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence/orm
http://xmlns.jcp.org/xml/ns/persistence/orm_2_1.xsd"
version="2.1">
<persistence-unit-metadata>
<persistence-unit-defaults>
<entity-listeners>
<entity-listener
class="org.hibernate.userguide.events.DefaultEntityListener">
<pre-persist method-name="onPersist"/>
<pre-update method-name="onUpdate"/>
</entity-listener>
</entity-listeners>
</persistence-unit-defaults>
</persistence-unit-metadata>
</entity-mappings>
JAVA
@MappedSuperclass
public abstract class BaseEntity {
JAVA
@Entity(name = "Person")
public static class Person extends BaseEntity {
@Id
private Long id;
@Entity(name = "Book")
public static class Book extends BaseEntity {
@Id
private Long id;
@ManyToOne
private Person author;
When persisting a Person or Book entity, the createdOn is going to be set by the onPersist method of the
DefaultEntityListener .
JAVA
Person author = new Person();
author.setId( 1L );
author.setName( "Vlad Mihalcea" );
entityManager.persist( author );
Book book = new Book();
book.setId( 1L );
book.setAuthor( author );
entityManager.persist( book );
SQL
insert
into
Person
(createdOn, updatedOn, name, id)
values
(?, ?, ?, ?)
insert
into
Book
(createdOn, updatedOn, author_id, title, id)
values
(?, ?, ?, ?, ?)
When updating a Person or Book entity, the updatedOn is going to be set by the onUpdate method of the
DefaultEntityListener .
JAVA
Person author = entityManager.find( Person.class, 1L );
author.setName( "Vlad-Alexandru Mihalcea" );
SQL
update
Person
set
createdOn=?,
updatedOn=?,
name=?
where
id=?
update
Book
set
createdOn=?,
updatedOn=?,
author_id=?,
title=?
where
id=?
@ExcludeDefaultListeners instructs the current class to ignore the default entity listeners for the current entity while
@ExcludeSuperclassListeners is used to ignore the default entity listeners propagated to the BaseEntity super-class.
JAVA
@Entity(name = "Publisher")
@ExcludeDefaultListeners
@ExcludeSuperclassListeners
public static class Publisher extends BaseEntity {
@Id
private Long id;
When persisting a Publisher entity, the createdOn is not going to be set by the onPersist method of the
DefaultEntityListener because the Publisher entity was marked with the @ExcludeDefaultListeners and
@ExcludeSuperclassListeners annotations.
JAVA
Publisher publisher = new Publisher();
publisher.setId( 1L );
publisher.setName( "Amazon" );
entityManager.persist( publisher );
SQL
insert
into
Publisher
(createdOn, updatedOn, name, id)
values
(?, ?, ?, ?)
15. HQL and JPQL
The Hibernate Query Language (HQL) and Java Persistence Query Language (JPQL) are both object-model-focused query
languages similar in nature to SQL. JPQL is a heavily-inspired-by subset of HQL. A JPQL query is always a valid HQL query; the
reverse is not true, however.
Both HQL and JPQL are non-type-safe ways to perform query operations. Criteria queries offer a type-safe approach to querying.
See Criteria for more information.
JAVA
@NamedQueries({
@NamedQuery(
name = "get_person_by_name",
query = "select p from Person p where name = :name"
)
,
@NamedQuery(
name = "get_read_only_person_by_name",
query = "select p from Person p where name = :name",
hints = {
@QueryHint(
name = "org.hibernate.readOnly",
value = "true"
)
}
)
})
@NamedStoredProcedureQueries(
@NamedStoredProcedureQuery(
name = "sp_person_phones",
procedureName = "sp_person_phones",
parameters = {
@StoredProcedureParameter(
    name = "personId",
    type = Long.class,
    mode = ParameterMode.IN
),
@StoredProcedureParameter(
    name = "personPhones",
    type = Class.class,
    mode = ParameterMode.REF_CURSOR
)
}
)
)
@Entity
public class Person {
@Id
@GeneratedValue
private Long id;
@Temporal(TemporalType.TIMESTAMP)
private Date createdOn;
@ElementCollection
@MapKeyEnumerated(EnumType.STRING)
private Map<AddressType, String> addresses = new HashMap<>();
@Version
private int version;
@Entity
public class Partner {
@Id
@GeneratedValue
private Long id;
@Version
private int version;
@Entity
public class Phone {
@Id
private Long id;
@Column(name = "phone_number")
private String number;
@Enumerated(EnumType.STRING)
@Column(name = "phone_type")
private PhoneType type;
@OneToMany(mappedBy = "phone")
@MapKey(name = "timestamp")
@MapKeyTemporal(TemporalType.TIMESTAMP)
private Map<Date, Call> callHistory = new HashMap<>();
@ElementCollection
private List<Date> repairTimestamps = new ArrayList<>();
@Entity
@Table(name = "phone_call")
public class Call {
@Id
@GeneratedValue
private Long id;
@ManyToOne
private Phone phone;
@Column(name = "call_timestamp")
private Date timestamp;
@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Payment {
@Id
@GeneratedValue
private Long id;
@ManyToOne
private Person person;
@Entity
public class CreditCardPayment extends Payment {
}
@Entity
public class WireTransferPayment extends Payment {
}
JAVA
Query query = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like :name"
);
Example 468. Obtaining a JPA Query or a TypedQuery reference for a named query
@NamedQuery(
name = "get_person_by_name",
query = "select p from Person p where name = :name"
)
Example 469. Obtaining a Hibernate Query or a TypedQuery reference for a named query
JAVA
@NamedQueries({
@NamedQuery(
name = "get_phone_by_number",
query = "select p " +
"from Phone p " +
"where p.number = :number",
timeout = 1,
readOnly = true
)
})
The Query interface can then be used to control the execution of the query. For example, we may want to specify an execution
timeout or control caching.
JAVA
Query query = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like :name" )
// timeout - in milliseconds
.setHint( "javax.persistence.query.timeout", 2000 )
// flush only at commit time
.setFlushMode( FlushModeType .COMMIT );
For complete details, see the Query Javadocs (http://docs.oracle.com/javaee/7/api/javax/persistence/Query.html). Many of the settings
controlling the execution of the query are defined as hints. JPA defines some standard hints (like timeout in the example), but
most are provider-specific. Relying on provider-specific hints limits your application's portability to some degree.
javax.persistence.query.timeout
Defines the query timeout, in milliseconds.
javax.persistence.fetchgraph
Defines a fetchgraph EntityGraph. Attributes explicitly specified as AttributeNodes are treated as FetchType.EAGER (via join
fetch or subsequent select). For details, see the EntityGraph discussions in Fetching.
javax.persistence.loadgraph
Defines a loadgraph EntityGraph. Attributes explicitly specified as AttributeNodes are treated as FetchType.EAGER (via join
fetch or subsequent select). Attributes that are not specified are treated as FetchType.LAZY or FetchType.EAGER depending
on the attribute’s definition in metadata. For details, see the EntityGraph discussions in Fetching.
org.hibernate.cacheMode
Defines the CacheMode to use. See org.hibernate.query.Query#setCacheMode .
org.hibernate.cacheable
Defines whether the query is cacheable. true/false. See org.hibernate.query.Query#setCacheable .
org.hibernate.cacheRegion
For queries that are cacheable, defines a specific cache region to use. See org.hibernate.query.Query#setCacheRegion .
org.hibernate.comment
Defines the comment to apply to the generated SQL. See org.hibernate.query.Query#setComment .
org.hibernate.fetchSize
Defines the JDBC fetch-size to use. See org.hibernate.query.Query#setFetchSize .
org.hibernate.flushMode
Defines the Hibernate-specific FlushMode to use. See org.hibernate.query.Query#setFlushMode. If possible, prefer using
javax.persistence.Query#setFlushMode instead.
org.hibernate.readOnly
Defines that entities and collections loaded by this query should be marked as read-only. See
org.hibernate.query.Query#setReadOnly
The final thing that needs to happen before the query can be executed is to bind the values for any defined parameters. JPA
defines a simplified set of parameter binding methods. Essentially, it supports setting the parameter value (by name/position) and
a specialized form for Calendar / Date types additionally accepting a TemporalType .
JPQL-style positional parameters are declared using a question mark followed by an ordinal - ?1 , ?2 . The ordinals start with 1.
Just like with named parameters, positional parameters can also appear multiple times in a query.
JAVA
Query query = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like ?1" )
.setParameter( 1, "J%" );
In terms of execution, the JPA Query offers two different methods for retrieving a result set.
Query#getResultList() - executes the select query and returns back the list of results.
Query#getSingleResult() - executes the select query and returns a single result. If there were more than one result an
exception is thrown.
JAVA
List<Person > persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like :name" )
.setParameter( "name", "J%" )
.getResultList();
JAVA
org.hibernate.query.Query query = session.createQuery(
"select p " +
"from Person p " +
"where p.name like :name"
);
JAVA
org.hibernate.query.Query query = session.getNamedQuery( "get_person_by_name" );
Not only was the JPQL syntax heavily inspired by HQL, but many of the JPA APIs were heavily inspired
by Hibernate too. The two Query contracts are very similar.
The Query interface can then be used to control the execution of the query. For example, we may want to specify an execution
timeout or control caching.
JAVA
org.hibernate.query.Query query = session.createQuery(
    "select p " +
    "from Person p " +
    "where p.name like :name" )
// timeout - in seconds
.setTimeout( 2 )
// write to L2 caches, but do not read from them
.setCacheMode( CacheMode.REFRESH )
// assuming query cache was enabled for the SessionFactory
.setCacheable( true )
// add a comment to the generated SQL if enabled via the hibernate.use_sql_comments configuration property
.setComment( "+ INDEX(p idx_person_name)" );
Query hints here are database query hints. They are added directly to the generated SQL according to
Dialect#getQueryHintString .
The JPA notion of query hints, on the other hand, refer to hints that target the provider (Hibernate).
So even though they are called the same, be aware they have a very different purpose. Also, be aware that
Hibernate query hints generally make the application non-portable across databases unless the code adding
them first checks the Dialect.
Flushing is covered in detail in Flushing. Locking is covered in detail in Locking. The concept of read-only state is covered in
Persistence Contexts.
Hibernate also allows an application to hook into the process of building the query results via the
org.hibernate.transform.ResultTransformer contract. See its Javadocs
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/transform/ResultTransformer.html) as well as the Hibernate-provided
implementations for additional details.
The last thing that needs to happen before we can execute the query is to bind the values for any parameters defined in the query.
Query defines many overloaded methods for this purpose. The most generic form takes the value as well as the Hibernate Type.
JAVA
org.hibernate.query.Query query = session.createQuery(
"select p " +
"from Person p " +
"where p.name like :name" )
.setParameter( "name", "J%", StringType.INSTANCE );
Hibernate generally understands the expected type of the parameter given its context in the query. In the previous example since
we are using the parameter in a LIKE comparison against a String-typed attribute Hibernate would automatically infer the type;
so the above could be simplified.
JAVA
org.hibernate.query.Query query = session.createQuery(
"select p " +
"from Person p " +
"where p.name like :name" )
.setParameter( "name", "J%" );
There are also shorthand forms for binding common types such as strings, booleans, integers, etc.
HQL-style positional parameters follow JDBC positional parameter syntax. They are declared using ? without a following ordinal.
There is no way to relate two such positional parameters as being "the same" aside from binding the same value to each.
JAVA
org.hibernate.query.Query query = session.createQuery(
"select p " +
"from Person p " +
"where p.name like ?1" )
.setParameter( 1, "J%" );
This form should be considered deprecated and may be removed in the near future.
In terms of execution, Hibernate offers 4 different methods. The 2 most commonly used are
Query#list - executes the select query and returns back the list of results.
Query#uniqueResult - executes the select query and returns the single result. If there were more than one result an
exception is thrown.
JAVA
List<Person > persons = session.createQuery(
"select p " +
"from Person p " +
"where p.name like :name" )
.setParameter( "name", "J%" )
.list();
If the unique result is used often and the attributes upon which it is based are unique, you may want
to consider mapping a natural-id and using the natural-id loading API. See the Natural Ids for more
information on this topic.
The main form accepts a single argument of type org.hibernate.ScrollMode which indicates the type of scrolling to be
used. See the Javadocs (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/ScrollMode.html) for the details on each.
The second form takes no argument and will use the ScrollMode indicated by Dialect#defaultScrollMode .
Query#scroll returns an org.hibernate.ScrollableResults which wraps the underlying JDBC (scrollable) ResultSet and
provides access to the results. Unlike a typical forward-only ResultSet , the ScrollableResults allows you to navigate the
ResultSet in any direction.
JAVA
try ( ScrollableResults scrollableResults = session.createQuery(
    "select p " +
    "from Person p " +
    "where p.name like :name" )
    .setParameter( "name", "J%" )
    .scroll()
) {
    while ( scrollableResults.next() ) {
        Person person = (Person) scrollableResults.get()[0];
        process( person );
    }
}
Since this form holds the JDBC ResultSet open, the application should indicate when it is done with
the ScrollableResults by calling its close() method (as inherited from java.io.Closeable ), so that
ScrollableResults will work with try-with-resources blocks.
If left unclosed by the application, Hibernate will automatically close the underlying resources (e.g. ResultSet
and PreparedStatement ) used internally by the ScrollableResults when the current transaction is ended
(either commit or rollback).
If you plan to use Query#scroll with collection fetches it is important that your query explicitly order
the results so that the JDBC results contain the related rows sequentially.
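For instance, a query fetching a collection might order by the root entity's identifier so that the related rows arrive sequentially. A sketch against the Person model used throughout this chapter:

```sql
select p
from Person p
join fetch p.phones
order by p.id
```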
Hibernate also supports Query#iterate , which is intended for loading entities when it is known that the loaded entries are
already stored in the second-level cache. The idea behind iterate is that just the matching identifiers will be obtained in the SQL
query. From these the identifiers are resolved by second-level cache lookup. If these second-level cache lookups fail, additional
queries will need to be issued against the database.
This operation can perform significantly better for loading large numbers of entities that are known for certain
to already exist in the second-level cache. In cases where many of the entities do not exist in the
second-level cache, this operation will almost certainly perform worse.
Since 5.2, Hibernate offers support for returning a Stream which can be later used to transform the underlying ResultSet .
Internally, the stream() behaves like a Query#scroll and the underlying result is backed by a ScrollableResults .
JAVA
try ( Stream<Object[]> persons = session.createQuery(
    "select p.name, p.nickName " +
    "from Person p " +
    "where p.name like :name" )
    .setParameter( "name", "J%" )
    .stream() ) {
    persons
    .map( row -> new PersonNames(
        (String) row[0],
        (String) row[1] ) )
    .forEach( this::process );
}
When fetching a single result, like a Person entity, instead of a Stream<Object[]> , Hibernate is going to figure out the actual
type, so the result is a Stream<Person> .
JAVA
try ( Stream<Person> persons = session.createQuery(
    "select p " +
    "from Person p " +
    "where p.name like :name" )
    .setParameter( "name", "J%" )
    .stream() ) {
    persons.forEach( this::process );
}
Just like with ScrollableResults , you should always close a Hibernate Stream either explicitly or
using a try-with-resources
(https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) block.
“ Caution should be used when executing bulk update or delete operations because
they may result in inconsistencies between the database and the entities in the
active persistence context. In general, bulk update and delete operations should only
be performed within a transaction in a new persistence context or before fetching or
accessing entities whose state might be affected by such operations.
— Section 4.10 of the JPA 2.0 Specification
SQL
select_statement ::=
[select_clause]
from_clause
[where_clause]
[groupby_clause]
[having_clause]
[orderby_clause]
JAVA
List<Person > persons = session.createQuery(
"from Person" )
.list();
The select statement in JPQL is exactly the same as for HQL except that JPQL requires a
select_clause , whereas HQL does not.
JAVA
List<Person > persons = entityManager.createQuery(
"select p " +
"from Person p", Person .class )
.getResultList();
Even though HQL does not require the presence of a select_clause , it is generally good practice to include
one. For simple queries the intent is clear and so the intended result of the select_clause is easy to infer. But
on more complex queries that is not always the case.
It is usually better to explicitly specify intent. Hibernate does not actually enforce that a select_clause be
present even when parsing JPQL queries, however, applications interested in JPA portability should take heed of
this.
SQL
update_statement ::=
update_clause [where_clause]
update_clause ::=
UPDATE entity_name [[AS] identification_variable]
SET update_item {, update_item}*
update_item ::=
[identification_variable.]{state_field | single_valued_object_field} = new_value
new_value ::=
scalar_expression | simple_entity_expression | NULL
UPDATE statements, by default, do not affect the version or the timestamp attribute values for the affected entities.
However, you can force Hibernate to set the version or timestamp attribute values through the use of a versioned update .
This is achieved by adding the VERSIONED keyword after the UPDATE keyword.
This is a Hibernate-specific feature and will not work in a portable manner.
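As a sketch, a versioned bulk update might look as follows; the VERSIONED keyword makes Hibernate bump the version of every affected Person:

```sql
update versioned Person
set name = :newName
where name = :oldName
```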
The int value returned by the executeUpdate() method indicates the number of entities affected by the operation. This may
or may not correlate to the number of rows affected in the database. An HQL bulk operation might result in multiple actual SQL
statements being executed (for joined-subclass, for example). The returned number indicates the number of actual entities
affected by the statement. Using a JOINED inheritance hierarchy, a delete against one of the subclasses may actually result in
deletes against not just the table to which that subclass is mapped, but also the "root" table and tables "in between".
Neither UPDATE nor DELETE statements allow implicit joins. Their form already disallows explicit joins
too.
SQL
delete_statement ::=
delete_clause [where_clause]
delete_clause ::=
DELETE FROM entity_name [[AS] identification_variable]
A DELETE statement is also executed using the executeUpdate() method of either org.hibernate.query.Query or
javax.persistence.Query .
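As a sketch against the Person model used in this chapter, a bulk delete executed via executeUpdate() might look like:

```sql
delete from Person
where name = :name
```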
SQL
insert_statement ::=
insert_clause select_statement
insert_clause ::=
INSERT INTO entity_name (attribute_list)
attribute_list ::=
state_field[, state_field ]*
The attribute_list is analogous to the column specification in the SQL INSERT statement. For entities involved in mapped
inheritance, only attributes directly defined on the named entity can be used in the attribute_list . Superclass properties are
not allowed and subclass properties do not make sense. In other words, INSERT statements are inherently non-polymorphic.
select_statement can be any valid HQL select query, with the caveat that the return types must match the types expected by
the insert. Currently, this is checked during query compilation rather than allowing the check to relegate to the database. This
may cause problems between Hibernate Types which are equivalent as opposed to equal. For example, this might lead to
issues with mismatches between an attribute mapped as a org.hibernate.type.DateType and an attribute defined as a
org.hibernate.type.TimestampType , even though the database might not make a distinction or might be able to handle the
conversion.
For the id attribute, the insert statement gives you two options. You can either explicitly specify the id property in the
attribute_list , in which case its value is taken from the corresponding select expression, or omit it from the attribute_list
in which case a generated value is used. This latter option is only available when using id generators that operate "in the
database"; attempting to use this option with any "in memory" type generators will cause an exception during parsing.
For optimistic locking attributes, the insert statement again gives you two options. You can either specify the attribute in the
attribute_list in which case its value is taken from the corresponding select expressions or omit it from the attribute_list
in which case the seed value defined by the corresponding org.hibernate.type.VersionType is used.
JAVA
int insertedEntities = session.createQuery(
"insert into Partner (id, name) " +
"select p.id, p.name " +
"from Person p ")
.executeUpdate();
In most cases declaring an identification variable is optional, though it is usually good practice to declare them.
An identification variable must follow the rules for Java identifier validity.
According to JPQL, identification variables must be treated as case-insensitive. Good practice says you should use the same case
throughout a query to refer to a given identification variable. In other words, JPQL says they can be case-insensitive and so
Hibernate must be able to treat them as such, but this does not make it good practice.
SQL
root_entity_reference ::=
entity_name [AS] identification_variable
JAVA
List<Person > persons = entityManager.createQuery(
"select p " +
"from org.hibernate.userguide.model.Person p", Person .class )
.getResultList();
We see that the query is defining a root entity reference to the org.hibernate.userguide.model.Person object model type.
Additionally, it declares an alias of p to that org.hibernate.userguide.model.Person reference, which is the identification
variable.
Usually, the root entity reference represents just the entity name rather than the entity class FQN (fully-qualified name). By
default, the entity name is the unqualified entity class name, here Person .
Example 490. Simple query using entity name for root entity reference
JAVA
List<Person > persons = entityManager.createQuery(
"select p " +
"from Person p", Person .class )
.getResultList();
Multiple root entity references can also be specified, even when naming the same entity.
JAVA
List<Object[]> persons = entityManager.createQuery(
"select distinct pr, ph " +
"from Person pr, Phone ph " +
"where ph.person = pr and ph is not null", Object[].class )
.getResultList();
JAVA
List<Person> persons = entityManager.createQuery(
"select distinct pr1 " +
"from Person pr1, Person pr2 " +
"where pr1.id <> pr2.id " +
"  and pr1.address = pr2.address " +
"  and pr1.createdOn < pr2.createdOn", Person.class )
.getResultList();
JAVA
List<Person> persons = entityManager.createQuery(
"select distinct pr " +
"from Person pr " +
"join pr.phones ph " +
"where ph.type = :phoneType", Person.class )
.setParameter( "phoneType", PhoneType.MOBILE )
.getResultList();
// functionally the same query but using the 'left outer' phrase
List<Person> persons = entityManager.createQuery(
"select distinct pr " +
"from Person pr " +
"left outer join pr.phones ph " +
"where ph is null " +
"  or ph.type = :phoneType", Person.class )
.setParameter( "phoneType", PhoneType.LAND_LINE )
.getResultList();
The WITH clause is specific to HQL; JPQL defines the ON clause for this feature.
JAVA
List<Object[]> personsAndPhones = session.createQuery(
"select pr.name, ph.number " +
"from Person pr " +
"left join pr.phones ph with ph.type = :phoneType " )
.setParameter( "phoneType", PhoneType.LAND_LINE )
.list();
JAVA
List<Object[]> personsAndPhones = entityManager.createQuery(
"select pr.name, ph.number " +
"from Person pr " +
"left join pr.phones ph on ph.type = :phoneType " )
.setParameter( "phoneType", PhoneType.LAND_LINE )
.getResultList();
The important distinction is that the conditions of the WITH/ON clause are made part of the ON clause in the generated SQL,
as opposed to the other queries in this section, where the HQL/JPQL conditions are made part of the WHERE clause in the
generated SQL.
The distinction in this specific example is probably not that significant. The with clause is sometimes necessary for more
complicated queries.
Explicit joins may reference association or component/embedded attributes. In the case of component/embedded attributes, the
join is simply logical and does not correlate to a physical (SQL) join. For further information about collection-valued association
references, see Collection member references.
An important use case for explicit joins is to define FETCH JOINS which override the laziness of the joined association. As an
example, given an entity named Person with a collection-valued association named phones , the JOIN FETCH will also load the
child collection in the same SQL query:
JAVA
List<Person> persons = entityManager.createQuery(
"select distinct pr " +
"from Person pr " +
"left join fetch pr.phones ", Person.class )
.getResultList();
As you can see from the example, a fetch join is specified by injecting the keyword fetch after the keyword join . In the
example, we used a left outer join because we also wanted to return persons who have no phones.
Inner joins can also be fetched, but inner joins filter out the root entity. In the example, using an inner join instead would have
resulted in persons without any phones being filtered out of the result.
Care should be taken when fetch joining a collection-valued association which is in any way further
restricted (the fetched collection will be restricted too). For this reason, it is usually considered best
practice not to assign an identification variable to fetched joins except for the purpose of specifying nested fetch
joins.
Fetch joins should not be used in paged queries (e.g. setFirstResult() or setMaxResults() ), nor should they
be used with the scroll() or iterate() features.
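For instance, combining a collection fetch join with pagination forces Hibernate to fetch all matching rows and apply the limits in memory (logged as warning HHH000104). A sketch of the anti-pattern, using the guide's model:

```java
// Anti-pattern sketch: collection fetch join combined with pagination.
// Hibernate cannot push setMaxResults() down to a SQL-level limit here, so
// it fetches every matching row and paginates in memory (HHH000104).
List<Person> persons = entityManager.createQuery(
    "select distinct pr " +
    "from Person pr " +
    "left join fetch pr.phones", Person.class )
.setFirstResult( 0 )
.setMaxResults( 10 )
.getResultList();
```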
Implicit joins are specified using path expressions.
JAVA
List<Phone> phones = entityManager.createQuery(
"select ph " +
"from Phone ph " +
"where ph.person.address = :address ", Phone.class )
.setParameter( "address", address )
.getResultList();
// same as
List<Phone> phones = entityManager.createQuery(
"select ph " +
"from Phone ph " +
"join ph.person pr " +
"where pr.address = :address ", Phone.class )
.setParameter( "address", address )
.getResultList();
An implicit join always starts from an identification variable , followed by the navigation operator ( . ), followed by an
attribute of the object model type referenced by the initial identification variable . In the example, the initial
identification variable is ph , which refers to the Phone entity. The ph.person reference then refers to the person
attribute of the Phone entity. person is an association type, so we further navigate to its address attribute.
As shown in the example, implicit joins can appear outside the FROM clause . However, they affect the FROM clause .
Multiple references to the same implicit join always refer to the same logical and physical (SQL) join.
// same as
List<Phone> phones = entityManager.createQuery(
"select ph " +
"from Phone ph " +
"inner join ph.person pr " +
"where pr.address = :address " +
"  and pr.createdOn > :timestamp", Phone.class )
.setParameter( "address", address )
.setParameter( "timestamp", timestamp )
.getResultList();
Just as with explicit joins, implicit joins may reference association or component/embedded attributes. For further information
about collection-valued association references, see Collection member references.
In the case of component/embedded attributes, the join is simply logical and does not correlate to a physical (SQL) join. Unlike
explicit joins, however, implicit joins may also reference basic state fields as long as the path expression ends there.
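As a sketch, assume Person also had an @Embedded Address component exposing a basic city attribute (this mapping is hypothetical, not part of the guide's model). The implicit join through person is a physical SQL join, while the navigation through the embedded address part is purely logical, ending at the basic city field:

```java
// Hypothetical mapping: Person with an @Embedded Address exposing 'city'.
// 'ph.person' produces a SQL join; '.address.city' is only a logical path
// ending at a basic state field.
List<Phone> phones = entityManager.createQuery(
    "select ph " +
    "from Phone ph " +
    "where ph.person.address.city = :city", Phone.class )
.setParameter( "city", "London" )
.getResultList();
```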
15.16. Distinct
For JPQL and HQL, DISTINCT has two meanings:
1. It can be passed to the database so that duplicates are removed from a result set
2. It can be used to filter out the same parent entity references when join fetching a child collection
JAVA
List<String> lastNames = entityManager.createQuery(
"select distinct p.lastName " +
"from Person p", String.class )
.getResultList();
When running the query above, Hibernate generates the following SQL query:
SQL
SELECT DISTINCT
p.last_name as col_0_0_
FROM person p
For this particular use case, passing the DISTINCT keyword from JPQL/HQL to the database is the right thing to do.
JAVA
List<Person> authors = entityManager.createQuery(
"select distinct p " +
"from Person p " +
"left join fetch p.books", Person.class )
.getResultList();
In this case, DISTINCT is used because there can be multiple Book entities associated with a given Person . If there are 3
Persons in the database and each person has 2 Books , without DISTINCT this query would return 6 Person entries since
the SQL-level result-set size is given by the number of joined Book records.
SQL
SELECT DISTINCT
p.id as id1_1_0_,
b.id as id1_0_1_,
p.first_name as first_na2_1_0_,
p.last_name as last_nam3_1_0_,
b.author_id as author_i3_0_1_,
b.title as title2_0_1_,
b.author_id as author_i3_0_0__,
b.id as id1_0_0__
FROM person p
LEFT OUTER JOIN book b ON p.id=b.author_id
In this case, the DISTINCT SQL keyword is undesirable since it causes a redundant result-set sort, as explained in this blog post
(http://in.relation.to/2016/08/04/introducing-distinct-pass-through-query-hint/). To fix this issue, Hibernate 5.2.2 added support for the
HINT_PASS_DISTINCT_THROUGH entity query hint:
JAVA
List<Person> authors = entityManager.createQuery(
"select distinct p " +
"from Person p " +
"left join fetch p.books", Person.class )
.setHint( QueryHints.HINT_PASS_DISTINCT_THROUGH, false )
.getResultList();
With this entity query hint, Hibernate will not pass the DISTINCT keyword to the SQL query:
SQL
SELECT
p.id as id1_1_0_,
b.id as id1_0_1_,
p.first_name as first_na2_1_0_,
p.last_name as last_nam3_1_0_,
b.author_id as author_i3_0_1_,
b.title as title2_0_1_,
b.author_id as author_i3_0_0__,
b.id as id1_0_0__
FROM person p
LEFT OUTER JOIN book b ON p.id=b.author_id
When using the HINT_PASS_DISTINCT_THROUGH entity query hint, Hibernate can still remove the duplicated parent-side entities
from the query result.
JAVA
List<Phone> phones = entityManager.createQuery(
"select ph " +
"from Person pr " +
"join pr.phones ph " +
"join ph.calls c " +
"where pr.address = :address " +
"  and c.duration > :duration", Phone.class )
.setParameter( "address", address )
.setParameter( "duration", duration )
.getResultList();
// alternate syntax
List<Phone> phones = session.createQuery(
"select ph " +
"from Person pr, " +
"in (pr.phones) ph, " +
"in (ph.calls) c " +
"where pr.address = :address " +
" and c.duration > :duration" )
.setParameter( "address", address )
.setParameter( "duration", duration )
.list();
In the example, the identification variable ph actually refers to the object model type Phone , which is the type of the elements of
the Person#phones association.
The example also shows the alternate syntax for specifying collection association joins using the IN syntax. Both forms are
equivalent. Which form an application chooses to use is simply a matter of taste.
JAVA
@OneToMany(mappedBy = "phone")
@MapKey(name = "timestamp")
@MapKeyTemporal(TemporalType.TIMESTAMP)
private Map<Date, Call> callHistory = new HashMap<>();
// select all the calls (the map value) for a given Phone
List<Call> calls = entityManager.createQuery(
"select ch " +
"from Phone ph " +
"join ph.callHistory ch " +
"where ph.id = :id ", Call.class )
.setParameter( "id", id )
.getResultList();
// same as above
List<Call> calls = entityManager.createQuery(
"select value(ch) " +
"from Phone ph " +
"join ph.callHistory ch " +
"where ph.id = :id ", Call.class )
.setParameter( "id", id )
.getResultList();
// select all the Call timestamps (the map key) for a given Phone
List<Date> timestamps = entityManager.createQuery(
"select key(ch) " +
"from Phone ph " +
"join ph.callHistory ch " +
"where ph.id = :id ", Date.class )
.setParameter( "id", id )
.getResultList();
// select all the Call and their timestamps (the 'Map.Entry') for a given Phone
List<Map.Entry<Date, Call>> callHistory = entityManager.createQuery(
"select entry(ch) " +
"from Phone ph " +
"join ph.callHistory ch " +
"where ph.id = :id " )
.setParameter( "id", id )
.getResultList();
VALUE
Refers to the collection value. Same as not specifying a qualifier. Useful to explicitly show intent. Valid for any type of
collection-valued reference.
INDEX
According to HQL rules, INDEX is valid both for Maps and for Lists that specify a javax.persistence.OrderColumn annotation:
it refers to the Map key or to the List position (the OrderColumn value). JPQL, however, reserves INDEX for the List
case and adds KEY for the Map case. Applications interested in JPA provider portability should be aware of this distinction.
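For example, assuming Person#phones is a List mapped with a javax.persistence.OrderColumn, the INDEX qualifier refers to each element's position within the list:

```java
// Sketch assuming Person#phones is a List with @OrderColumn:
// index(ph) resolves to the element's position within the list.
List<Phone> phones = entityManager.createQuery(
    "select ph " +
    "from Person pr " +
    "join pr.phones ph " +
    "where index(ph) = 0", Phone.class )
.getResultList();
```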
KEY
Valid only for Maps . Refers to the map’s key. If the key is itself an entity, it can be further navigated.
ENTRY
Only valid for Maps . Refers to the map’s logical java.util.Map.Entry tuple (the combination of its key and value). ENTRY is
only valid as a terminal path and it’s applicable to the SELECT clause only.
15.19. Polymorphism
HQL and JPQL queries are inherently polymorphic.
JAVA
List<Payment> payments = entityManager.createQuery(
"select p " +
"from Payment p ", Payment.class )
.getResultList();
This query names the Payment entity explicitly. However, all subclasses of Payment are also available to the query. So, if the
CreditCardPayment and WireTransferPayment entities extend the Payment class, all three types would be available to the
entity query, and the query would return instances of all three.
This behavior can be controlled by using the org.hibernate.annotations.Polymorphism annotation (global, and
Hibernate-specific). See the @Polymorphism section for more info about this use case.
The HQL query from java.lang.Object is totally valid (although not very practical from a performance
perspective)!
It returns every object of every entity type defined by your application mappings.
15.20. Expressions
Essentially, expressions are references that resolve to basic or tuple values.
15.23. Literals
String literals are enclosed in single quotes. To escape a single quote within a string literal, use double single quotes.
JAVA
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like 'Joe'", Person.class )
.getResultList();
// Escaping quotes
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like 'Joe''s'", Person.class )
.getResultList();
// decimal notation
List<Call> calls = entityManager.createQuery(
"select c " +
"from Call c " +
"where c.duration > 100.5", Call.class )
.getResultList();
// scientific notation
List<Call> calls = entityManager.createQuery(
"select c " +
"from Call c " +
"where c.duration > 1e+2", Call.class )
.getResultList();
Specific typing can be achieved through the use of the same suffix approach specified by Java. So, L
denotes a long, D denotes a double, F denotes a float. The actual suffix is case-insensitive.
Enums can even be referenced as literals. The fully-qualified enum class name must be used. HQL can also
handle constants in the same manner, though JPQL does not define that as being supported.
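A brief sketch combining both: a numeric literal carrying the L suffix and an enum literal referenced by its fully-qualified class name (the package name is assumed here for illustration):

```java
// 1L is typed as a long via the suffix; the enum literal must be
// referenced by its fully-qualified class name (package assumed).
List<Phone> phones = entityManager.createQuery(
    "select p " +
    "from Phone p " +
    "where p.id = 1L " +
    "  and p.type = org.hibernate.userguide.model.PhoneType.MOBILE", Phone.class )
.getResultList();
```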
These Date/time literals only work if the underlying JDBC driver supports them.
15.24. Arithmetic
Arithmetic operations also represent valid expressions.
JAVA
// select clause date/time arithmetic operations
Long duration = entityManager.createQuery(
"select sum(ch.duration) * :multiplier " +
"from Person pr " +
"join pr.phones ph " +
"join ph.callHistory ch " +
"where ph.id = 1L ", Long.class )
.setParameter( "multiplier", 1000L )
.getSingleResult();
else, if either operand is BigInteger , the result is BigInteger (except for division, in which case the result type is not
further defined)
else, if either operand is Long / long , the result is Long (except for division, in which case the result type is not further
defined)
else, (the assumption being that both operands are of integral type) the result is Integer (except for division, in which case
the result type is not further defined)
Date arithmetic is also supported, albeit in a more limited fashion. This is due partly to differences in database support and
partly to the lack of support for INTERVAL definitions in the query language itself.
JAVA
String name = entityManager.createQuery(
"select 'Customer ' || p.name " +
"from Person p " +
"where p.id = 1", String.class )
.getSingleResult();
AVG
MIN
MAX
SUM
The result type of the SUM() function depends on the type of the values being summed. For integral values (other than
BigInteger ), the result type is Long .
For floating point values (other than BigDecimal ) the result type is Double . For BigInteger values, the result type is
BigInteger . For BigDecimal values, the result type is BigDecimal .
JAVA
Object[] callStatistics = entityManager.createQuery(
"select " +
"  count(c), " +
"  sum(c.duration), " +
"  min(c.duration), " +
"  max(c.duration), " +
"  avg(c.duration) " +
"from Call c ", Object[].class )
.getSingleResult();
Aggregations often appear with grouping. For information on grouping see Group by.
CONCAT
String concatenation function. Variable argument length of 2 or more string values to be concatenated together.
JAVA
List<String> callHistory = entityManager.createQuery(
"select concat( p.number, ' : ', cast(c.duration as string) ) " +
"from Call c " +
"join c.phone p", String.class )
.getResultList();
SUBSTRING
Extracts a portion of a string value. The second argument denotes the starting position, where 1 is the first character of the
string. The third (optional) argument denotes the length.
JAVA
List<String> prefixes = entityManager.createQuery(
"select substring( p.number, 1, 2 ) " +
"from Call c " +
"join c.phone p", String.class )
.getResultList();
UPPER
Upper cases the specified string
JAVA
List<String> names = entityManager.createQuery(
"select upper( p.name ) " +
"from Person p ", String.class )
.getResultList();
LOWER
Lower cases the specified string
JAVA
List<String> names = entityManager.createQuery(
"select lower( p.name ) " +
"from Person p ", String.class )
.getResultList();
TRIM
Follows the semantics of the SQL trim function.
JAVA
List<String> names = entityManager.createQuery(
"select trim( p.name ) " +
"from Person p ", String.class )
.getResultList();
LENGTH
Returns the length of a string.
JAVA
List<Integer> lengths = entityManager.createQuery(
"select length( p.name ) " +
"from Person p ", Integer.class )
.getResultList();
LOCATE
Locates a string within another string. The third argument (optional) is used to denote a position from which to start looking.
JAVA
List<Integer> sizes = entityManager.createQuery(
"select locate( 'John', p.name ) " +
"from Person p ", Integer.class )
.getResultList();
ABS
Calculates the mathematical absolute value of a numeric value.
JAVA
List<Integer> abs = entityManager.createQuery(
"select abs( c.duration ) " +
"from Call c ", Integer.class )
.getResultList();
MOD
Calculates the remainder of dividing the first argument by the second.
JAVA
List<Integer> mods = entityManager.createQuery(
"select mod( c.duration, 10 ) " +
"from Call c ", Integer.class )
.getResultList();
SQRT
Calculates the mathematical square root of a numeric value.
JAVA
List<Double> sqrts = entityManager.createQuery(
"select sqrt( c.duration ) " +
"from Call c ", Double.class )
.getResultList();
CURRENT_DATE
Returns the database current date.
JAVA
List<Call> calls = entityManager.createQuery(
"select c " +
"from Call c " +
"where c.timestamp = current_date", Call.class )
.getResultList();
CURRENT_TIME
Returns the database current time.
JAVA
List<Call> calls = entityManager.createQuery(
"select c " +
"from Call c " +
"where c.timestamp = current_time", Call.class )
.getResultList();
CURRENT_TIMESTAMP
Returns the database current timestamp.
JAVA
List<Call> calls = entityManager.createQuery(
"select c " +
"from Call c " +
"where c.timestamp = current_timestamp", Call.class )
.getResultList();
BIT_LENGTH
Returns the length of binary data.
JAVA
List<Number> bits = entityManager.createQuery(
"select bit_length( c.duration ) " +
"from Call c ", Number.class )
.getResultList();
CAST
Performs a SQL cast. The cast target should name the Hibernate mapping type to use. See the data types chapter for more
information.
JAVA
List<String> durations = entityManager.createQuery(
"select cast( c.duration as string ) " +
"from Call c ", String.class )
.getResultList();
EXTRACT
Performs a SQL extraction on datetime values. An extraction extracts parts of the datetime (the year, for example).
YEAR
Abbreviated extract form for extracting the year.
JAVA
List<Integer> years = entityManager.createQuery(
"select year( c.timestamp ) " +
"from Call c ", Integer.class )
.getResultList();
MONTH
Abbreviated extract form for extracting the month.
DAY
Abbreviated extract form for extracting the day.
HOUR
Abbreviated extract form for extracting the hour.
MINUTE
Abbreviated extract form for extracting the minute.
SECOND
Abbreviated extract form for extracting the second.
STR
Abbreviated form for casting a value as character data.
JAVA
List<String> timestamps = entityManager.createQuery(
"select str( c.timestamp ) " +
"from Call c ", String.class )
.getResultList();
List<String> timestamps = entityManager.createQuery(
"select str( cast(c.duration as float) / 60, 4, 2 ) " +
"from Call c ", String.class )
.getResultList();
Dialect-specific functions will only be available when using that database Dialect. Applications that aim for database
portability should avoid using functions in this category.
Application developers can also supply their own set of functions. This would usually represent either custom SQL functions or
aliases for snippets of SQL. Such function declarations are made by using the addSqlFunction() method of
org.hibernate.cfg.Configuration .
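A sketch of such a registration via the legacy Configuration API; the function name and SQL template here are illustrative, not part of the guide:

```java
import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.function.SQLFunctionTemplate;
import org.hibernate.type.StandardBasicTypes;

// Register an HQL alias for a SQL snippet; queries can then invoke
// regexp_like(...) and Hibernate renders the template below.
Configuration configuration = new Configuration();
configuration.addSqlFunction(
    "regexp_like",
    new SQLFunctionTemplate( StandardBasicTypes.BOOLEAN, "?1 regexp ?2" )
);
```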
SIZE
Calculate the size of a collection. Equates to a subquery!
MAXELEMENT
Available for use on collections of basic type. Refers to the maximum value as determined by applying the max SQL
aggregation.
MAXINDEX
Available for use on indexed collections. Refers to the maximum index (key/position) as determined by applying the max SQL
aggregation.
MINELEMENT
Available for use on collections of basic type. Refers to the minimum value as determined by applying the min SQL
aggregation.
MININDEX
Available for use on indexed collections. Refers to the minimum index (key/position) as determined by applying the min SQL
aggregation.
ELEMENTS
Used to refer to the elements of a collection as a whole. Only allowed in the where clause. Often used in conjunction with ALL ,
ANY or SOME restrictions.
INDICES
Similar to elements except that the indices expression refers to the collections indices (keys/positions) as a whole.
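For instance (a sketch against the guide's model, where phone is assumed to hold a Phone reference), elements can be combined with a quantifier in the where clause:

```java
// SOME quantifier over the collection elements: persons owning the
// given phone ('phone' is an assumed Phone reference).
List<Person> persons = entityManager.createQuery(
    "select p " +
    "from Person p " +
    "where :phone = some elements(p.phones)", Person.class )
.setParameter( "phone", phone )
.getResultList();
```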
Elements of indexed collections (arrays, lists, and maps) can be referred to by the index operator.
JAVA
// indexed lists
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.phones[ 0 ].type = 'LAND_LINE'", Person.class )
.getResultList();
// maps
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.addresses[ 'HOME' ] = :address", Person.class )
.setParameter( "address", address )
.getResultList();
See also Special case - qualified path expressions as there is a good deal of overlap.
JAVA
List<Payment> payments = entityManager.createQuery(
"select p " +
"from Payment p " +
"where type(p) = CreditCardPayment", Payment.class )
.getResultList();
List<Payment> payments = entityManager.createQuery(
"select p " +
"from Payment p " +
"where type(p) = :type", Payment.class )
.setParameter( "type", WireTransferPayment.class )
.getResultList();
HQL also has a legacy form of referring to an entity type, though that legacy form is considered
deprecated in favor of TYPE . The legacy form would have used p.class in the examples rather than
type(p) . It is mentioned only for completeness.
JAVA
CASE {operand} WHEN {test_value} THEN {match_result} ELSE {miss_result} END
JAVA
List<String> nickNames = entityManager.createQuery(
"select " +
"  case p.nickName " +
"  when 'NA' " +
"    then '<no nick name>' " +
"  else p.nickName " +
"  end " +
"from Person p", String.class )
.getResultList();
// same as above
List<String> nickNames = entityManager.createQuery(
"select coalesce(p.nickName, '<no nick name>') " +
"from Person p", String.class )
.getResultList();
JAVA
CASE [ WHEN {test_conditional} THEN {match_result} ]* ELSE {miss_result} END
JAVA
List<String> nickNames = entityManager.createQuery(
"select nullif( p.nickName, p.name ) " +
"from Person p", String.class )
.getResultList();
There is a particular expression type that is only valid in the select clause. Hibernate calls this "dynamic instantiation". JPQL
supports some of that feature and calls it a "constructor expression".
So rather than dealing with the Object[] (again, see Hibernate Query API) here, we are wrapping the values in a type-safe Java
object that will be returned as the results of the query.
JAVA
public class CallStatistics {
    private final long count;
    private final long total;
    private final int min;
    private final int max;
    private final double avg;
    public CallStatistics(long count, long total, int min, int max, double avg) {
        this.count = count;
        this.total = total;
        this.min = min;
        this.max = max;
        this.avg = avg;
    }
}
The class reference must be fully qualified and it must have a matching constructor.
The class here need not be mapped. If it does represent an entity, the resulting instances are
returned in the NEW state (not managed!).
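Assuming the CallStatistics class above lives in the org.hibernate.userguide.model package (an assumption made for illustration), the constructor expression would look like:

```java
// The fully-qualified class name and the argument list must match a
// constructor of the target class (package name assumed here).
CallStatistics callStatistics = entityManager.createQuery(
    "select new org.hibernate.userguide.model.CallStatistics(" +
    "   count(c), " +
    "   sum(c.duration), " +
    "   min(c.duration), " +
    "   max(c.duration), " +
    "   avg(c.duration)" +
    ") " +
    "from Call c ", CallStatistics.class )
.getSingleResult();
```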
HQL supports additional "dynamic instantiation" features. First, the query can specify to return a List rather than an Object[]
for scalar results, using select new list(...) . Similarly, scalar results can be wrapped in a Map using select new map(...) :
JAVA
List<Map> phoneCallTotalDurations = entityManager.createQuery(
"select new map(" +
"  p.number as phoneNumber, " +
"  sum(c.duration) as totalDuration, " +
"  avg(c.duration) as averageDuration " +
") " +
"from Call c " +
"join c.phone p " +
"group by p.number ", Map.class )
.getResultList();
The results from this query will be a List<Map<String, Object>> as opposed to a List<Object[]> . The keys of the map are
defined by the aliases given to the select expressions. If the user doesn’t assign aliases, the key will be the index of each particular
result set column (e.g. 0, 1, 2, etc).
15.39. Predicates
Predicates form the basis of the where clause, the having clause, and searched case expressions. They are expressions which
resolve to a truth value, generally TRUE or FALSE , although boolean comparisons involving NULL typically resolve to UNKNOWN .
JAVA
// numeric comparison
List<Call> calls = entityManager.createQuery(
"select c " +
"from Call c " +
"where c.duration < 30 ", Call.class )
.getResultList();
// string comparison
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like 'John%' ", Person.class )
.getResultList();
// datetime comparison
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.createdOn > '1950-01-01' ", Person.class )
.getResultList();
// enum comparison
List<Phone> phones = entityManager.createQuery(
"select p " +
"from Phone p " +
"where p.type = 'MOBILE' ", Phone.class )
.getResultList();
// boolean comparison
List<Payment> payments = entityManager.createQuery(
"select p " +
"from Payment p " +
"where p.completed = true ", Payment.class )
.getResultList();
// entity type comparison
List<Payment> payments = entityManager.createQuery(
"select p " +
"from Payment p " +
"where type(p) = WireTransferPayment ", Payment.class )
.getResultList();
Comparisons can also involve subquery qualifiers: ALL , ANY , SOME . SOME and ANY are synonymous.
The ALL qualifier resolves to true if the comparison is true for all of the values in the result of the subquery. It also resolves
to true if the subquery result is empty.
The ANY / SOME qualifier resolves to true if the comparison is true for some of (at least one of) the values in the result of the
subquery. It resolves to false if the subquery result is empty.
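As a sketch against the guide's model, ALL can compare a value against every row of a correlated subquery:

```java
// Phones whose every call is shorter than the given duration
// ('duration' is an assumed bind value).
List<Phone> phones = entityManager.createQuery(
    "select p " +
    "from Phone p " +
    "where :duration > all (" +
    "   select c.duration from p.calls c" +
    ")", Phone.class )
.setParameter( "duration", duration )
.getResultList();
```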
JAVA
// select all persons with a nickname
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.nickName is not null", Person.class )
.getResultList();
JAVA
like_expression ::=
string_expression
[NOT] LIKE pattern_value
[ESCAPE escape_character]
The semantics follow that of the SQL like expression. The pattern_value is the pattern to attempt to match in the
string_expression . Just like SQL, pattern_value can use _ and % as wildcards. The meanings are the same. The _ symbol
matches any single character and % matches any number of characters.
The optional ESCAPE clause specifies an escape character used to escape the special meaning of _ and % in the
pattern_value . This is useful when needing to search on patterns including either _ or % .
The syntax is formed as follows: 'like_predicate' escape 'escape_symbol' . So, if | is the escape symbol and we want to
match all names prefixed with Dr_ , the like criteria becomes: 'Dr|_%' escape '|' :
JAVA
// find any person with a name starting with "Dr_"
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.name like 'Dr|_%' escape '|'", Person.class )
.getResultList();
15.44. In predicate
The IN predicate performs a check that a particular value is in a list of values. Its syntax is:
JAVA
in_expression ::=
single_valued_expression [NOT] IN single_valued_list
single_valued_list ::=
constructor_expression | (subquery) | collection_valued_input_parameter
The types of the single_valued_expression and the individual values in the single_valued_list must be consistent.
JPQL limits the valid types here to string, numeric, date, time, timestamp, and enum types, and, in JPQL,
single_valued_expression can only refer to:
"state fields", which is its term for simple attributes. Specifically, this excludes association and component/embedded
attributes.
In HQL, single_valued_expression can refer to a far broader set of expression types. Single-valued associations are
allowed, and so are component/embedded attributes, although that feature depends on the level of support for tuple or "row
value constructor syntax" in the underlying database. Additionally, HQL does not limit the value type in any way, though
application developers should be aware that different types may incur limited support based on the underlying database vendor.
This is largely the reason for the JPQL limitations.
The list of values can come from a number of different sources. In the constructor_expression and
collection_valued_input_parameter , the list of values must not be empty; it must contain at least one value.
JAVA
List<Payment> payments = entityManager.createQuery(
"select p " +
"from Payment p " +
"where type(p) in ( CreditCardPayment, WireTransferPayment )", Payment.class )
.getResultList();
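The list of values can also be bound from a collection-valued parameter, which must contain at least one value; the names below are illustrative:

```java
// Binding a non-empty collection as the IN list.
List<Person> persons = entityManager.createQuery(
    "select p " +
    "from Person p " +
    "where p.name in (:names)", Person.class )
.setParameter( "names", Arrays.asList( "John Doe", "Mrs. John Doe" ) )
.getResultList();
```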
JAVA
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where p.phones is empty", Person.class )
.getResultList();
JAVA
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"where 'Home address' member of p.addresses", Person.class )
.getResultList();
If the predicate is true, NOT resolves to false. If the predicate is unknown (e.g. NULL ), the NOT
resolves to unknown as well.
15.52. Group by
The GROUP BY clause allows building aggregated results for various value groups. As an example, consider the following queries:
JAVA
Long totalDuration = entityManager.createQuery(
"select sum( c.duration ) " +
"from Call c ", Long.class )
.getSingleResult();
This query retrieves the complete total of all call durations. Grouping by person, as in the following example, retrieves a
total for each person instead.
In a grouped query, the where clause applies to the non-aggregated values (essentially it determines whether rows will make it
into the aggregation). The HAVING clause also restricts results, but it operates on the aggregated values. In the Group by example,
we retrieved Call duration totals for all persons. If that ended up being too much data to deal with, we might want to restrict the
results to focus only on persons with a summed total of more than 1000:
JAVA
List<Object[]> personTotalCallDurations = entityManager.createQuery(
"select p.name, sum( c.duration ) " +
"from Call c " +
"join c.phone ph " +
"join ph.person p " +
"group by p.name " +
"having sum( c.duration ) > 1000", Object[].class )
.getResultList();
The HAVING clause follows the same rules as the WHERE clause and is also made up of predicates. HAVING is applied after the
groupings and aggregations have been done, while the WHERE clause is applied before.
15.53. Order by
The results of the query can also be ordered. The ORDER BY clause is used to specify the selected values to be used to order the
result. The types of expressions considered valid as part of the ORDER BY clause include:
state fields
component/embeddable attributes
identification variable declared in the select clause for any of the previous expression types
Additionally, JPQL says that all values referenced in the ORDER BY clause must be named in the SELECT clause. HQL does not
mandate that restriction, but applications desiring database portability should be aware that not all databases support
referencing values in the ORDER BY clause that are not referenced in the select clause.
Individual expressions in the order-by can be qualified with either ASC (ascending) or DESC (descending) to indicate the desired
ordering direction. Null values can be placed in front or at the end of the sorted set using NULLS FIRST or NULLS LAST clause
respectively.
JAVA
List<Person> persons = entityManager.createQuery(
"select p " +
"from Person p " +
"order by p.name", Person.class )
.getResultList();
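A sketch combining direction and null ordering; how NULLS LAST is rendered depends on the underlying database:

```java
// Descending order, with null names placed at the end of the result.
List<Person> persons = entityManager.createQuery(
    "select p " +
    "from Person p " +
    "order by p.name desc nulls last", Person.class )
.getResultList();
```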
Read-only entities are skipped by the dirty checking mechanism as illustrated by the following example:
SQL
SELECT c.id AS id1_5_ ,
c.duration AS duration2_5_ ,
c.phone_id AS phone_id4_5_ ,
c.call_timestamp AS call_tim3_5_
FROM phone_call c
INNER JOIN phone p ON c.phone_id = p.id
WHERE p.phone_number = '123-456-7890'
You can also pass the read-only hint to named queries using the JPA @QueryHint
(http://docs.oracle.com/javaee/7/api/javax/persistence/QueryHint.html) annotation.
Example 531. Fetching read-only entities using a named query and the read-only hint
JAVA
@NamedQuery(
name = "get_read_only_person_by_name",
query = "select p from Person p where name = :name",
hints = {
@QueryHint(
name = "org.hibernate.readOnly",
value = "true"
)
}
)
The Hibernate native API offers a Query#setReadOnly method, as an alternative to using a JPA query hint:
JAVA
List<Call> calls = entityManager.createQuery(
"select c " +
"from Call c " +
"join c.phone p " +
"where p.number = :phoneNumber ", Call.class )
.setParameter( "phoneNumber", "123-456-7890" )
.unwrap( org.hibernate.query.Query .class )
.setReadOnly( true )
.getResultList();
16. Criteria
Criteria queries offer a type-safe alternative to HQL, JPQL and native SQL queries.
This chapter will focus on the JPA APIs for declaring type-safe criteria queries.
Criteria queries are a programmatic, type-safe way to express a query. They are type-safe in terms of using interfaces and classes
to represent various structural parts of a query such as the query itself, the select clause, or an order-by, etc. They can also be
type-safe in terms of referencing attributes as we will see in a bit. Users of the older Hibernate org.hibernate.Criteria query
API will recognize the general approach, though we believe the JPA API to be superior as it represents a clean look at the lessons
learned from that API.
Criteria queries are essentially an object graph, where each part of the graph represents an increasingly (as we navigate down this
graph) more atomic part of the query. The first step in performing a criteria query is building this graph. The
javax.persistence.criteria.CriteriaBuilder interface is the first thing with which you need to become acquainted to begin
using criteria queries. Its role is that of a factory for all the individual pieces of the criteria. You obtain a
javax.persistence.criteria.CriteriaBuilder instance by calling the getCriteriaBuilder() method of either
javax.persistence.EntityManagerFactory or javax.persistence.EntityManager .
The next step is to obtain a javax.persistence.criteria.CriteriaQuery . This is accomplished using one of the three methods
on javax.persistence.criteria.CriteriaBuilder for this purpose:
<T> CriteriaQuery<T> createQuery( Class<T> resultClass )
CriteriaQuery<Tuple> createTupleQuery()
CriteriaQuery<Object> createQuery()
Each serves a different purpose depending on the expected type of the query results.
Chapter 6 Criteria API of the JPA Specification already contains a decent amount of reference material
pertaining to the various parts of a criteria query. So rather than duplicate all that content here, let’s
instead look at some of the more widely anticipated usages of the API.
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
The example uses createQuery() passing in the Person class reference as the results of the query will be Person objects.
The call to the CriteriaQuery#select method in this example is unnecessary because root will be the
implied selection since we have only a single query root. It was done here only for completeness of an
example.
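Reconstructed in full, the criteria query this passage describes might look like the following sketch (the name predicate value "John Doe" is purely illustrative):

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

// Typed as Person because the results are Person objects
CriteriaQuery<Person> criteria = builder.createQuery( Person.class );
Root<Person> root = criteria.from( Person.class );
criteria.select( root );
criteria.where( builder.equal( root.get( Person_.name ), "John Doe" ) );

List<Person> persons = entityManager.createQuery( criteria ).getResultList();
```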
The Person_.name reference is an example of the static form of JPA Metamodel reference. We will use that form
exclusively in this chapter. See the documentation for the Hibernate JPA Metamodel Generator
(https://docs.jboss.org/hibernate/orm/5.3/topical/html_single/metamodelgen/MetamodelGenerator.html) for additional
details on the JPA static Metamodel.
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
In this example, the query is typed as java.lang.String because that is the anticipated type of the results (the type of the
Person#nickName attribute is java.lang.String ). Because a query might contain multiple references to the Person entity,
attribute references always need to be qualified. This is accomplished by the Root#get method call.
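A sketch of selecting a single attribute value, as described above:

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

// Typed as String because Person#nickName is a String
CriteriaQuery<String> criteria = builder.createQuery( String.class );
Root<Person> root = criteria.from( Person.class );

// Root#get qualifies the attribute reference
criteria.select( root.get( Person_.nickName ) );

List<String> nickNames = entityManager.createQuery( criteria ).getResultList();
```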
There are actually a few different ways to select multiple values using criteria queries. We will explore two options here, but an
alternative recommended approach is to use tuples as described in Tuple criteria queries, or consider a wrapper query, see
Selecting a wrapper for details.
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
Technically this is classified as a typed query, but you can see from handling the results that this is sort of misleading. Anyway, the
expected result type here is an array.
The example then uses the array method of javax.persistence.criteria.CriteriaBuilder which explicitly combines
individual selections into a javax.persistence.criteria.CompoundSelection .
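Combining individual selections with the array method might be sketched as:

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

CriteriaQuery<Object[]> criteria = builder.createQuery( Object[].class );
Root<Person> root = criteria.from( Person.class );

Path<Long> idPath = root.get( Person_.id );
Path<String> nickNamePath = root.get( Person_.nickName );

// array() explicitly combines the two selections into a CompoundSelection
criteria.select( builder.array( idPath, nickNamePath ) );

List<Object[]> idsAndNickNames = entityManager.createQuery( criteria ).getResultList();
```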
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
Just as we saw in Selecting an array we have a typed criteria query returning an Object array. Both queries are functionally
equivalent. This second example uses the multiselect() method which behaves slightly differently based on the type given
when the criteria query was first built, but, in this case, it says to select and return an Object[].
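The multiselect() variant might be sketched as:

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

CriteriaQuery<Object[]> criteria = builder.createQuery( Object[].class );
Root<Person> root = criteria.from( Person.class );

// With an Object[] result type, multiselect packs the selections into an array per row
criteria.multiselect( root.get( Person_.id ), root.get( Person_.nickName ) );

List<Object[]> idsAndNickNames = entityManager.createQuery( criteria ).getResultList();
```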
First, we see the simple definition of the wrapper object we will be using to wrap our result values. Specifically, notice the
constructor and its argument types. Since we will be returning PersonWrapper objects, we use PersonWrapper as the type of
our criteria query.
This example illustrates the use of the javax.persistence.criteria.CriteriaBuilder method construct which is used to
build a wrapper expression. For every row in the result, we are saying we would like a PersonWrapper instantiated with the
remaining arguments by the matching constructor. This wrapper expression is then passed as the select.
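A sketch of such a wrapper query, assuming PersonWrapper declares a matching PersonWrapper(Long id, String nickName) constructor:

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

CriteriaQuery<PersonWrapper> criteria = builder.createQuery( PersonWrapper.class );
Root<Person> root = criteria.from( Person.class );

// construct() instantiates a PersonWrapper for every result row via the matching constructor
criteria.select( builder.construct(
    PersonWrapper.class,
    root.get( Person_.id ),
    root.get( Person_.nickName )
) );

List<PersonWrapper> wrappers = entityManager.createQuery( criteria ).getResultList();
```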
This example illustrates accessing the query results through the javax.persistence.Tuple interface. The example uses the
explicit createTupleQuery() of javax.persistence.criteria.CriteriaBuilder . An alternate approach is to use
createQuery( Tuple.class ) .
Again we see the use of the multiselect() method, just like in Selecting an array using multiselect . The difference here is
that the type of the javax.persistence.criteria.CriteriaQuery was defined as javax.persistence.Tuple so the
compound selections, in this case, are interpreted to be the tuple elements.
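A sketch of the tuple form described above:

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

CriteriaQuery<Tuple> criteria = builder.createTupleQuery();
Root<Person> root = criteria.from( Person.class );

Path<Long> idPath = root.get( Person_.id );
Path<String> nickNamePath = root.get( Person_.nickName );

// With a Tuple query type, the compound selections become the tuple elements
criteria.multiselect( idPath, nickNamePath );

List<Tuple> tuples = entityManager.createQuery( criteria ).getResultList();
for ( Tuple tuple : tuples ) {
    Long id = tuple.get( idPath );                   // typed access
    String nickName = tuple.get( 1, String.class );  // positional access
}
```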
The javax.persistence.Tuple contract provides three forms of access to the underlying elements:
typed
The Selecting a tuple example illustrates this form of access in the tuple.get( idPath ) and tuple.get( nickNamePath )
calls. This allows typed access to the underlying tuple values based on the javax.persistence.TupleElement expressions
used to build the criteria.
positional
Allows access to the underlying tuple values based on the position. The simple Object get(int position) form is very similar to the
access illustrated in Selecting an array and Selecting an array using multiselect . The <X> X get(int position, Class<X> type)
form allows typed positional access, based on the explicitly supplied type to which the tuple value must be type-assignable.
aliased
Allows access to the underlying tuple values based on an (optionally) assigned alias. The example query did not apply an alias. An
alias would be applied via the alias method on javax.persistence.criteria.Selection . Just like positional access, there
is both an untyped (Object get(String alias)) and a typed (<X> X get(String alias, Class<X> type)) form.
A CriteriaQuery object defines a query over one or more entity, embeddable, or basic abstract schema types. The root
objects of the query are entities, from which the other types are reached by navigation.
All the individual parts of the FROM clause (roots, joins, paths) implement the
javax.persistence.criteria.From interface.
16.8. Roots
Roots define the basis from which all joins, paths and attributes are available in the query. A root is always an entity type. Roots
are defined and added to the criteria by the overloaded from methods on javax.persistence.criteria.CriteriaQuery :
JAVA
<X> Root<X> from( Class<X> entityClass );
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
Criteria queries may define multiple roots, the effect of which is to create a Cartesian Product between the newly added root and
the others. Here is an example defining a Cartesian Product between Person and Partner entities:
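A sketch of such a multi-root query (assuming a Partner_ static metamodel with a name attribute):

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

CriteriaQuery<Tuple> criteria = builder.createTupleQuery();

// Two roots produce a Cartesian Product, which is usually restricted by a predicate
Root<Person> personRoot = criteria.from( Person.class );
Root<Partner> partnerRoot = criteria.from( Partner.class );

criteria.multiselect( personRoot, partnerRoot );
criteria.where( builder.equal(
    personRoot.get( Person_.name ),
    partnerRoot.get( Partner_.name )
) );

List<Tuple> tuples = entityManager.createQuery( criteria ).getResultList();
```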
16.9. Joins
Joins allow navigation from another javax.persistence.criteria.From expression to either association or embedded attributes. Joins are
created by the numerous overloaded join methods of the javax.persistence.criteria.From interface.
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
// Phone.person is a @ManyToOne
Join<Phone , Person > personJoin = root.join( Phone_ .person );
// Person.addresses is an @ElementCollection
Join<Person , String > addressesJoin = personJoin.join( Person_ .addresses );
16.10. Fetches
Just like in HQL and JPQL, criteria queries can specify that associated data be fetched along with the owner. Fetches are created
by the numerous overloaded fetch methods of the javax.persistence.criteria.From interface.
// Phone.person is a @ManyToOne
Fetch <Phone , Person > personFetch = root.fetch( Phone_ .person );
// Person.addresses is an @ElementCollection
Fetch <Person , String > addressesJoin = personFetch.fetch( Person_ .addresses );
Technically speaking, embedded attributes are always fetched with their owner. However, in order to
define the fetching of Person#addresses we needed a javax.persistence.criteria.Fetch because
element collections are LAZY by default.
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
Use the parameter method of javax.persistence.criteria.CriteriaBuilder to obtain a parameter reference. Then use the
parameter reference to bind the parameter value to the javax.persistence.Query .
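A sketch of this parameter workflow (the bound value "JD" is illustrative):

```java
CriteriaBuilder builder = entityManager.getCriteriaBuilder();

CriteriaQuery<Person> criteria = builder.createQuery( Person.class );
Root<Person> root = criteria.from( Person.class );

// Obtain a parameter reference from the CriteriaBuilder...
ParameterExpression<String> nickNameParameter = builder.parameter( String.class );
criteria.where( builder.equal( root.get( Person_.nickName ), nickNameParameter ) );

// ...then use that same reference to bind the value on the Query
List<Person> persons = entityManager.createQuery( criteria )
    .setParameter( nickNameParameter, "JD" )
    .getResultList();
```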
JAVA
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Tuple> criteria = builder.createQuery( Tuple.class );
Root<Person> root = criteria.from( Person.class );
criteria.groupBy( root.get( "address" ) );
criteria.multiselect( root.get( "address" ), builder.count( root ) );
List<Tuple> tuples = entityManager.createQuery( criteria ).getResultList();
You may also express queries in the native SQL dialect of your database. This is useful if you want to utilize database-specific
features such as window functions, Common Table Expressions (CTE) or the CONNECT BY option in Oracle. It also provides a clean
migration path from a direct SQL/JDBC based application to Hibernate/JPA. Hibernate also allows you to specify handwritten SQL
(including stored procedures) for all create, update, delete, and retrieve operations.
JAVA
List<Object []> persons = entityManager.createNativeQuery(
"SELECT * FROM Person" )
.getResultList();
JAVA
List<Object []> persons = session.createNativeQuery(
"SELECT * FROM Person" )
.list();
JAVA
List<Object []> persons = session.createNativeQuery(
"SELECT id, name FROM Person" )
.list();
These will return a List of Object arrays ( Object[] ) with scalar values for each column in the PERSON table. Hibernate will
use java.sql.ResultSetMetaData to deduce the actual order and types of the returned scalar values.
To avoid the overhead of using ResultSetMetaData , or simply to be more explicit in what is returned, one can use
addScalar() :
Example 550. Hibernate native query with explicit result set selection
JAVA
List<Object []> persons = session.createNativeQuery(
"SELECT * FROM Person" )
.addScalar( "id", LongType .INSTANCE )
.addScalar( "name", StringType .INSTANCE )
.list();
Although it still returns Object arrays, this query will not use the ResultSetMetaData anymore since it explicitly gets the
id and name columns as respectively a Long and a String from the underlying ResultSet . This also means that only
these two columns will be returned, even though the query is still using * and the ResultSet contains more than the two
listed columns.
It is possible to leave out the type information for all or some of the scalars.
Example 551. Hibernate native query with a partially explicit result set selection
JAVA
List<Object []> persons = session.createNativeQuery(
"SELECT * FROM Person" )
.addScalar( "id", LongType .INSTANCE )
.addScalar( "name" )
.list();
This is essentially the same query as before, but now ResultSetMetaData is used to determine the type of name , whereas the
type of id is explicitly specified.
How the java.sql.Types returned from ResultSetMetaData is mapped to Hibernate types is controlled by the Dialect . If a
specific type is not mapped, or does not result in the expected type, it is possible to customize it via calls to
registerHibernateType in the Dialect.
JAVA
List<Person > persons = entityManager.createNativeQuery(
"SELECT * FROM Person", Person .class )
.getResultList();
JAVA
List<Person > persons = session.createNativeQuery(
"SELECT * FROM Person" )
.addEntity( Person .class )
.list();
Assuming that Person is mapped as a class with the columns id , name , nickName , address , createdOn , and version , the
following query will also return a List where each element is a Person entity.
Example 554. JPA native query selecting entities with explicit result set
JAVA
List<Person > persons = entityManager.createNativeQuery(
"SELECT id, name, nickName, address, createdOn, version " +
"FROM Person", Person .class )
.getResultList();
Example 555. Hibernate native query selecting entities with explicit result set
JAVA
List<Person > persons = session.createNativeQuery(
"SELECT id, name, nickName, address, createdOn, version " +
"FROM Person" )
.addEntity( Person .class )
.list();
Example 556. JPA native query selecting entities with many-to-one association
JAVA
List<Phone > phones = entityManager.createNativeQuery(
"SELECT id, phone_number, phone_type, person_id " +
"FROM Phone", Phone .class )
.getResultList();
Example 557. Hibernate native query selecting entities with many-to-one association
JAVA
List<Phone > phones = session.createNativeQuery(
"SELECT id, phone_number, phone_type, person_id " +
"FROM Phone" )
.addEntity( Phone .class )
.list();
This will allow the Phone#person association to function properly since the many-to-one or one-to-one association is going to use a
proxy that will be initialized when being navigated for the first time.
It is possible to eagerly join the Phone and the Person entities to avoid the possible extra roundtrip for initializing the many-
to-one association.
Example 558. Hibernate native query selecting entities with joined many-to-one association
SQL
SELECT
*
FROM
Phone ph
JOIN
Person pr
ON ph.person_id = pr.id
As seen in the associated SQL query, Hibernate manages to construct the entity hierarchy without
requiring any extra database roundtrip.
By default, when using the addJoin() method, the result set will contain both entities that are joined. To construct the entity
hierarchy, you need to use a ROOT_ENTITY or DISTINCT_ROOT_ENTITY ResultTransformer .
Example 559. Hibernate native query selecting entities with joined many-to-one association and
ResultTransformer
JAVA
List<Person > persons = session.createNativeQuery(
"SELECT * " +
"FROM Phone ph " +
"JOIN Person pr ON ph.person_id = pr.id" )
.addEntity("phone", Phone .class )
.addJoin( "pr", "phone.person")
.setResultTransformer( Criteria .ROOT_ENTITY )
.list();
Because of the ROOT_ENTITY ResultTransformer , this query will return the parent-side as root
entities.
Notice that an alias name pr was added to be able to specify the target property path of the join. It is possible to do the same eager
joining for collections (e.g. the Phone#calls one-to-many association).
Example 560. JPA native query selecting entities with joined one-to-many association
JAVA
List<Phone > phones = entityManager.createNativeQuery(
"SELECT * " +
"FROM Phone ph " +
"JOIN phone_call c ON c.phone_id = ph.id", Phone .class )
.getResultList();
SQL
SELECT *
FROM phone ph
JOIN call c ON c.phone_id = ph.id
Example 561. Hibernate native query selecting entities with joined one-to-many association
JAVA
List<Object []> tuples = session.createNativeQuery(
"SELECT * " +
"FROM Phone ph " +
"JOIN phone_call c ON c.phone_id = ph.id" )
.addEntity("phone", Phone .class )
.addJoin( "c", "phone.calls")
.list();
SQL
SELECT *
FROM phone ph
JOIN call c ON c.phone_id = ph.id
At this stage, you are reaching the limits of what is possible with native queries without starting to enhance the SQL queries to
make them usable in Hibernate. Problems can arise when returning multiple entities of the same type or when the default
alias/column names are not enough.
Until now, the result set column names are assumed to be the same as the column names specified in the mapping document. This
can be problematic for SQL queries that join multiple tables since the same column names can appear in more than one table.
Column alias injection is needed in the following query which otherwise throws NonUniqueDiscoveredSqlAliasException .
Example 562. JPA native query selecting entities with the same column names
JAVA
List<Object > entities = entityManager.createNativeQuery(
"SELECT * " +
"FROM Person pr, Partner pt " +
"WHERE pr.name = pt.name" )
.getResultList();
Example 563. Hibernate native query selecting entities with the same column names
JAVA
List<Object > entities = session.createNativeQuery(
"SELECT * " +
"FROM Person pr, Partner pt " +
"WHERE pr.name = pt.name" )
.list();
The query was intended to return all Person and Partner instances with the same name. The query fails because there is a
conflict of names since the two entities are mapped to the same column names (e.g. id , name , version ). Also, on some
databases, the returned column aliases will most likely be of the form pr.id , pr.name , etc. which are not equal to the columns
specified in the mappings ( id and name ).
Example 564. Hibernate native query selecting entities with the same column names and aliases
JAVA
List<Object > entities = session.createNativeQuery(
"SELECT {pr.*}, {pt.*} " +
"FROM Person pr, Partner pt " +
"WHERE pr.name = pt.name" )
.addEntity( "pr", Person .class )
.addEntity( "pt", Partner .class )
.list();
There’s no such equivalent in JPA because the Query interface doesn’t define an addEntity method
equivalent.
The {pr.*} and {pt.*} notation used above is shorthand for "all properties". Alternatively, you can list the columns explicitly,
but even in this case, Hibernate injects the SQL column aliases for each property. The placeholder for a column alias is just the
property name qualified by the table alias.
The following table shows the different ways you can use the alias injection. Please note that the alias names in the result are
simply examples, each alias will have a unique and probably different name when used.
There’s no such equivalent in JPA because the Query interface doesn’t define a setResultTransformer
method equivalent.
The above query will return a list of PersonSummaryDTO objects which have been instantiated with the values of id and name
injected into their corresponding properties or fields.
JAVA
List<CreditCardPayment > payments = session.createNativeQuery(
"SELECT * " +
"FROM Payment p " +
"JOIN CreditCardPayment cp on cp.id = p.id" )
.addEntity( CreditCardPayment .class )
.list();
There’s no such equivalent in JPA because the Query interface doesn’t define an addEntity method
equivalent.
17.9. Parameters
Native SQL queries support positional as well as named parameters:
JAVA
List<Person > persons = entityManager.createNativeQuery(
"SELECT * " +
"FROM Person " +
"WHERE name like :name", Person .class )
.setParameter("name", "J%")
.getResultList();
JAVA
List<Person > persons = session.createNativeQuery(
"SELECT * " +
"FROM Person " +
"WHERE name like :name" )
.addEntity( Person .class )
.setParameter("name", "J%")
.list();
JPA defines the javax.persistence.NamedNativeQuery annotation for this purpose, and the Hibernate
org.hibernate.annotations.NamedNativeQuery annotation extends it and adds the following attributes:
flushMode()
The flush mode for the query. By default, it uses the current Persistence Context flush mode.
cacheable()
Whether the query (results) is cacheable or not. By default, queries are not cached.
cacheRegion()
If the query results are cacheable, name the query cache region to use.
fetchSize()
The number of rows fetched by the JDBC Driver per database trip. The default value is given by the JDBC driver.
timeout()
The query timeout (in seconds). By default, there's no timeout.
callable()
Does the SQL query represent a call to a procedure/function? The default is false.
comment()
A comment added to the SQL query for tuning the execution plan.
cacheMode()
The cache mode used for this query. This refers to entities/collections returned by the query. The default value is
CacheModeType.NORMAL .
readOnly()
Whether the results should be read-only. By default, queries are not read-only so entities are stored in the Persistence Context.
JAVA
@NamedNativeQuery(
name = "find_person_name",
query =
"SELECT name " +
"FROM Person "
),
JAVA
List<String > names = entityManager.createNamedQuery(
"find_person_name" )
.getResultList();
JAVA
List<String > names = session.getNamedQuery(
"find_person_name" )
.list();
@NamedNativeQuery(
name = "find_person_name_and_nickName",
query =
"SELECT " +
" name, " +
" nickName " +
"FROM Person "
),
Without specifying an explicit result type, Hibernate will assume an Object array:
Example 573. JPA named native query selecting multiple scalar values
JAVA
List<Object []> tuples = entityManager.createNamedQuery(
"find_person_name_and_nickName" )
.getResultList();
Example 574. Hibernate named native query selecting multiple scalar values
JAVA
List<Object []> tuples = session.getNamedQuery(
"find_person_name_and_nickName" )
.list();
JAVA
@NamedNativeQuery(
name = "find_person_name_and_nickName_dto",
query =
"SELECT " +
" name, " +
" nickName " +
"FROM Person ",
resultSetMapping = "name_and_nickName_dto"
),
@SqlResultSetMapping(
name = "name_and_nickName_dto",
classes = @ConstructorResult(
targetClass = PersonNames .class ,
columns = {
@ColumnResult(name = "name"),
@ColumnResult(name = "nickName")
}
)
)
Example 577. JPA named native query selecting multiple scalar values into a DTO
JAVA
List<PersonNames > personNames = entityManager.createNamedQuery(
"find_person_name_and_nickName_dto" )
.getResultList();
Example 578. Hibernate named native query selecting multiple scalar values into a DTO
JAVA
List<PersonNames > personNames = session.getNamedQuery(
"find_person_name_and_nickName_dto" )
.list();
Example 579. Multiple scalar values using ConstructorResult and Hibernate NamedNativeQuery
JAVA
@NamedNativeQueries({
@NamedNativeQuery(
name = "get_person_phone_count",
query = "SELECT pr.name AS name, count(*) AS phoneCount " +
"FROM Phone p " +
"JOIN Person pr ON pr.id = p.person_id " +
"GROUP BY pr.name",
resultSetMapping = "person_phone_count",
timeout = 1,
readOnly = true
),
})
@SqlResultSetMapping(
name = "person_phone_count",
classes = @ConstructorResult(
targetClass = PersonPhoneCount .class ,
columns = {
@ColumnResult(name = "name"),
@ColumnResult(name = "phoneCount")
}
)
)
Example 580. Hibernate NamedNativeQuery named native query selecting multiple scalar values into a
DTO
JAVA
List<PersonPhoneCount > personNames = session.getNamedNativeQuery(
"get_person_phone_count")
.getResultList();
@NamedNativeQuery(
name = "find_person_by_name",
query =
"SELECT " +
" p.id AS \"id\", " +
" p.name AS \"name\", " +
" p.nickName AS \"nickName\", " +
" p.address AS \"address\", " +
" p.createdOn AS \"createdOn\", " +
" p.version AS \"version\" " +
"FROM Person p " +
"WHERE p.name LIKE :name",
resultClass = Person .class
),
The result set mapping declares the entities retrieved by this native query. Each field of the entity is bound to an SQL alias (or
column name). All fields of the entity including the ones of subclasses and the foreign key columns of related entities have to be
present in the SQL query. Field definitions are optional provided that they map to the same column name as the one declared on
the class property.
JAVA
List<Person > persons = entityManager.createNamedQuery(
"find_person_by_name" )
.setParameter("name", "J%")
.getResultList();
JAVA
List<Person > persons = session.getNamedQuery(
"find_person_by_name" )
.setParameter("name", "J%")
.list();
To join multiple entities, you need to use a SqlResultSetMapping for each entity the SQL query is going to fetch.
@NamedNativeQuery(
name = "find_person_with_phones_by_name",
query =
"SELECT " +
" pr.id AS \"pr.id\", " +
" pr.name AS \"pr.name\", " +
" pr.nickName AS \"pr.nickName\", " +
" pr.address AS \"pr.address\", " +
" pr.createdOn AS \"pr.createdOn\", " +
" pr.version AS \"pr.version\", " +
" ph.id AS \"ph.id\", " +
" ph.person_id AS \"ph.person_id\", " +
" ph.phone_number AS \"ph.number\", " +
" ph.phone_type AS \"ph.type\" " +
"FROM Person pr " +
"JOIN Phone ph ON pr.id = ph.person_id " +
"WHERE pr.name LIKE :name",
resultSetMapping = "person_with_phones"
)
@SqlResultSetMapping(
name = "person_with_phones",
entities = {
@EntityResult(
entityClass = Person .class ,
fields = {
@FieldResult( name = "id", column = "pr.id" ),
@FieldResult( name = "name", column = "pr.name" ),
@FieldResult( name = "nickName", column = "pr.nickName" ),
@FieldResult( name = "address", column = "pr.address" ),
@FieldResult( name = "createdOn", column = "pr.createdOn" ),
@FieldResult( name = "version", column = "pr.version" ),
}
),
@EntityResult(
entityClass = Phone .class ,
fields = {
@FieldResult( name = "id", column = "ph.id" ),
@FieldResult( name = "person", column = "ph.person_id" ),
@FieldResult( name = "number", column = "ph.number" ),
@FieldResult( name = "type", column = "ph.type" ),
}
)
}
),
Example 585. JPA named native entity query with joined associations
JAVA
List<Object []> tuples = entityManager.createNamedQuery(
"find_person_with_phones_by_name" )
.setParameter("name", "J%")
.getResultList();
Example 586. Hibernate named native entity query with joined associations
Finally, if the association to a related entity involves a composite primary key, a @FieldResult element should be used for each
foreign key column. The @FieldResult name is composed of the property name for the relationship, followed by a dot ("."),
followed by the name of the field or property of the primary key. For this example, the following entities are going to be used:
Example 587. Entity associations with composite keys and named native queries
@Embeddable
public class Dimensions {
@Embeddable
public class Identity implements Serializable {
@Entity
public class Captain {
@EmbeddedId
private Identity id;
@Entity
@NamedNativeQueries({
@NamedNativeQuery(name = "find_all_spaceships",
query =
"SELECT " +
" name as \"name\", " +
" model, " +
" speed, " +
" lname as lastn, " +
" fname as firstn, " +
" length, " +
" width, " +
" length * width as surface, " +
" length * width * 10 as volume " +
"FROM SpaceShip",
resultSetMapping = "spaceship"
)
})
@SqlResultSetMapping(
name = "spaceship",
entities = @EntityResult(
entityClass = SpaceShip .class ,
fields = {
@FieldResult(name = "name", column = "name"),
@FieldResult(name = "model", column = "model"),
@FieldResult(name = "speed", column = "speed"),
@FieldResult(name = "captain.lastname", column = "lastn"),
@FieldResult(name = "captain.firstname", column = "firstn"),
@FieldResult(name = "dimensions.length", column = "length"),
@FieldResult(name = "dimensions.width", column = "width"),
}
),
columns = {
@ColumnResult(name = "surface"),
@ColumnResult(name = "volume")
}
)
public class SpaceShip {
@Id
private String name;
Example 588. JPA named native entity query with joined associations and composite keys
JAVA
List<Object []> tuples = entityManager.createNamedQuery(
"find_all_spaceships" )
.getResultList();
Example 589. Hibernate named native entity query with joined associations and composite keys
XML
<property name="hibernate.default_catalog" value="crm"/>
<property name="hibernate.default_schema" value="analytics"/>
This way, we can imply the global crm catalog and analytics schema in every JPQL, HQL or Criteria API query.
However, for native queries, the SQL query is passed as is, therefore you need to explicitly set the global catalog and schema
whenever you are referencing a database table. Fortunately, Hibernate allows you to resolve the current global catalog and
schema using the following placeholders:
{h-catalog}
resolves the current hibernate.default_catalog configuration property value.
{h-schema}
resolves the current hibernate.default_schema configuration property value.
{h-domain}
resolves the current hibernate.default_catalog and hibernate.default_schema configuration property values (e.g.
catalog.schema).
With these placeholders, you can imply the catalog, schema, or both catalog and schema for every native query.
JAVA
@NamedNativeQuery(
name = "last_30_days_hires",
query =
"select * " +
"from {h-domain}person " +
"where age(hired_on) < '30 days'",
resultClass = Person .class
)
Hibernate is going to resolve the {h-domain} placeholder according to the values of the default catalog and schema:
SELECT *
FROM crm.analytics.person
WHERE age(hired_on) < '30 days'
JAVA
statement.executeUpdate(
"CREATE PROCEDURE sp_count_phones (" +
" IN personId INT, " +
" OUT phoneCount INT " +
") " +
"BEGIN " +
" SELECT COUNT(*) INTO phoneCount " +
" FROM Phone p " +
" WHERE p.person_id = personId; " +
"END"
);
To use this stored procedure, you can execute the following JPA 2.1 query:
Example 592. Calling a MySQL stored procedure with OUT parameter type using JPA
JAVA
StoredProcedureQuery query = entityManager.createStoredProcedureQuery( "sp_count_phones");
query.registerStoredProcedureParameter( "personId", Long.class , ParameterMode .IN);
query.registerStoredProcedureParameter( "phoneCount", Long.class , ParameterMode .OUT);
query.setParameter("personId", 1L);
query.execute();
Long phoneCount = (Long) query.getOutputParameterValue("phoneCount");
Example 593. Calling a MySQL stored procedure with OUT parameter type using Hibernate
JAVA
Session session = entityManager.unwrap( Session .class );
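The body of this Hibernate example is truncated above; using the native org.hibernate.procedure.ProcedureCall API, it would plausibly look like this sketch:

```java
Session session = entityManager.unwrap( Session.class );

// Native equivalent of the JPA StoredProcedureQuery call shown earlier
ProcedureCall call = session.createStoredProcedureCall( "sp_count_phones" );
call.registerParameter( "personId", Long.class, ParameterMode.IN ).bindValue( 1L );
call.registerParameter( "phoneCount", Long.class, ParameterMode.OUT );

Long phoneCount = (Long) call.getOutputs().getOutputParameterValue( "phoneCount" );
```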
If the stored procedure outputs the result directly without an OUT parameter type:
JAVA
statement.executeUpdate(
"CREATE PROCEDURE sp_phones(IN personId INT) " +
"BEGIN " +
" SELECT * " +
" FROM Phone " +
" WHERE person_id = personId; " +
"END"
);
You can retrieve the results of the aforementioned MySQL stored procedure as follows:
Example 595. Calling a MySQL stored procedure and fetching the result set without an OUT parameter
type using JPA
JAVA
StoredProcedureQuery query = entityManager.createStoredProcedureQuery( "sp_phones");
query.registerStoredProcedureParameter( 1, Long.class , ParameterMode .IN);
query.setParameter(1, 1L);

List<Object[]> phones = query.getResultList();
Example 596. Calling a MySQL stored procedure and fetching the result set without an OUT parameter
type using Hibernate
JAVA
Session session = entityManager.unwrap( Session .class );
For REF_CURSOR result sets, we'll consider the following Oracle stored procedure:
JAVA
statement.executeUpdate(
"CREATE OR REPLACE PROCEDURE sp_person_phones ( " +
" personId IN NUMBER, " +
" personPhones OUT SYS_REFCURSOR ) " +
"AS " +
"BEGIN " +
" OPEN personPhones FOR " +
" SELECT *" +
" FROM phone " +
" WHERE person_id = personId; " +
"END;"
);
REF_CURSOR result sets are only supported by Oracle and PostgreSQL because other database
systems' JDBC drivers don't support this feature.
This function can be called using the standard Java Persistence API:
JAVA
StoredProcedureQuery query = entityManager.createStoredProcedureQuery( "sp_person_phones" );
query.registerStoredProcedureParameter( 1, Long.class , ParameterMode .IN );
query.registerStoredProcedureParameter( 2, Class .class , ParameterMode .REF_CURSOR );
query.setParameter( 1, 1L );
query.execute();
List<Object []> postComments = query.getResultList();
JAVA
Session session = entityManager.unwrap(Session .class );
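The Hibernate example body is truncated above; with the native ProcedureCall API, fetching the REF_CURSOR output would plausibly look like this sketch:

```java
Session session = entityManager.unwrap( Session.class );

ProcedureCall call = session.createStoredProcedureCall( "sp_person_phones" );
call.registerParameter( 1, Long.class, ParameterMode.IN ).bindValue( 1L );
call.registerParameter( 2, Class.class, ParameterMode.REF_CURSOR );

// The REF_CURSOR output materializes as a result set
Output output = call.getOutputs().getCurrent();
List<Object[]> personPhones = ( (ResultSetOutput) output ).getResultList();
```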
JAVA
statement.executeUpdate(
"CREATE FUNCTION fn_count_phones(personId integer) " +
"RETURNS integer " +
"DETERMINISTIC " +
"READS SQL DATA " +
"BEGIN " +
" DECLARE phoneCount integer; " +
" SELECT COUNT(*) INTO phoneCount " +
" FROM Phone p " +
" WHERE p.person_id = personId; " +
" RETURN phoneCount; " +
"END"
);
Because the current StoredProcedureQuery implementation doesn’t yet support SQL functions, we need to use the JDBC syntax.
JAVA
final AtomicReference<Integer> phoneCount = new AtomicReference<>();
Session session = entityManager.unwrap( Session.class );
session.doWork( connection -> {
	try (CallableStatement function = connection.prepareCall(
			"{ ? = call fn_count_phones(?) }" )) {
		function.registerOutParameter( 1, Types.INTEGER );
		function.setInt( 2, 1 );
		function.execute();
		phoneCount.set( function.getInt( 1 ) );
	}
} );
Since these servers can return multiple result sets and update counts, Hibernate will iterate the
results and take the first result that is a result set as its return value, so everything else will be
discarded.
For SQL Server, enabling SET NOCOUNT ON in your procedure will probably make it more efficient, but this is
not a requirement.
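As a sketch of the tip above (the procedure body is illustrative, not taken from this guide), SET NOCOUNT ON is simply placed at the top of the SQL Server procedure so that "rows affected" counts are not sent back before the result set:

```sql
-- Hypothetical SQL Server procedure; SET NOCOUNT ON suppresses the
-- update-count messages so the result set is returned directly.
CREATE PROCEDURE sp_phones @personId BIGINT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * FROM phone WHERE person_id = @personId;
END
```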
@NamedStoredProcedureQueries(
	@NamedStoredProcedureQuery(
		name = "sp_person_phones",
		procedureName = "sp_person_phones",
		parameters = {
			@StoredProcedureParameter(
				name = "personId",
				type = Long.class,
				mode = ParameterMode.IN
			),
			@StoredProcedureParameter(
				name = "personPhones",
				type = Class.class,
				mode = ParameterMode.REF_CURSOR
			)
		}
	)
)
Example 603. Calling an Oracle REF_CURSOR stored procedure using a JPA named query
JAVA
List<Object[]> postComments = entityManager
.createNamedStoredProcedureQuery( "sp_person_phones" )
.setParameter( "personId", 1L )
.getResultList();
17.14. Custom SQL for CRUD (Create, Read, Update and Delete)
Hibernate can use custom SQL for CRUD operations. The SQL can be overridden at the statement level or individual column level.
This section describes statement overrides. For columns, see Column transformers: read and write expressions.
The following example shows how to define custom SQL operations using annotations. @SQLInsert, @SQLUpdate, and
@SQLDelete override the INSERT, UPDATE, and DELETE statements of a given entity. For the SELECT statement, a @Loader must be
defined along with a @NamedNativeQuery used for loading the underlying table record.
For collections, Hibernate allows defining a custom @SQLDeleteAll which is used for removing all child records associated with
a given parent entity. To filter collections, the @Where annotation allows customizing the underlying SQL WHERE clause.
@Entity(name = "Person")
@SQLInsert(
sql = "INSERT INTO person (name, id, valid) VALUES (?, ?, true) ",
check = ResultCheckStyle .COUNT
)
@SQLUpdate(
sql = "UPDATE person SET name = ? where id = ? "
)
@SQLDelete(
sql = "UPDATE person SET valid = false WHERE id = ? "
)
@Loader(namedQuery = "find_valid_person")
@NamedNativeQueries({
@NamedNativeQuery(
name = "find_valid_person",
query = "SELECT id, name " +
"FROM person " +
"WHERE id = ? and valid = true",
resultClass = Person .class
)
})
public static class Person {
@Id
@GeneratedValue
private Long id;
@ElementCollection
@SQLInsert(
sql = "INSERT INTO person_phones (person_id, phones, valid) VALUES (?, ?, true) ")
@SQLDeleteAll(
sql = "UPDATE person_phones SET valid = false WHERE person_id = ?")
@Where( clause = "valid = true" )
private List<String > phones = new ArrayList <>();
In the example above, the entity is mapped so that entries are soft-deleted (the records are not removed from the database, but
instead, a flag marks the row validity). The Person entity benefits from custom INSERT, UPDATE, and DELETE statements which
update the valid column accordingly. The custom @Loader is used to retrieve only Person rows that are valid.
The same is done for the phones collection. The @SQLDeleteAll and @SQLInsert queries are used whenever the collection
is modified.
You can also call a stored procedure using the custom CRUD statements. The only requirement is to set
the callable attribute to true.
To check that the execution happens correctly, Hibernate allows you to define one of these three strategies:
none: no check is performed; the stored procedure is expected to fail upon constraint violations
count: use the row count returned by the executeUpdate() method call to check that the update was successful
param: like count, but using a CallableStatement output parameter instead of the standard update-count mechanism
The parameter order is important and is defined by the order in which Hibernate handles properties. You can
see the expected order by enabling debug logging so that Hibernate prints out the static SQL used to
create, update, and delete entities.
To see the expected sequence, remember not to include your custom SQL through annotations or mapping files,
as that will override the Hibernate-generated static SQL.
Overriding SQL statements for secondary tables is also possible using @org.hibernate.annotations.Table and the
sqlInsert, sqlUpdate, and sqlDelete attributes.
@Entity(name = "Person")
@Table(name = "person")
@SQLInsert(
sql = "INSERT INTO person (name, id, valid) VALUES (?, ?, true) "
)
@SQLDelete(
sql = "UPDATE person SET valid = false WHERE id = ? "
)
@SecondaryTable(name = "person_details",
pkJoinColumns = @PrimaryKeyJoinColumn(name = "person_id"))
@org.hibernate.annotations.Table (
appliesTo = "person_details",
sqlInsert = @SQLInsert(
sql = "INSERT INTO person_details (image, person_id, valid) VALUES (?, ?, true) ",
check = ResultCheckStyle .COUNT
),
sqlDelete = @SQLDelete(
sql = "UPDATE person_details SET valid = false WHERE person_id = ? "
)
)
@Loader(namedQuery = "find_valid_person")
@NamedNativeQueries({
@NamedNativeQuery(
name = "find_valid_person",
query = "SELECT " +
" p.id, " +
" p.name, " +
" pd.image " +
"FROM person p " +
"LEFT OUTER JOIN person_details pd ON p.id = pd.person_id " +
"WHERE p.id = ? AND p.valid = true AND pd.valid = true",
resultClass = Person .class
)
})
public static class Person {
@Id
@GeneratedValue
private Long id;
The SQL is directly executed in your database, so you can use any dialect you like. This will, however,
reduce the portability of your mapping if you use database-specific SQL.
You can also use stored procedures for customizing the CRUD statements.
JAVA
statement.executeUpdate(
"CREATE OR REPLACE PROCEDURE sp_delete_person ( " +
" personId IN NUMBER ) " +
"AS " +
"BEGIN " +
" UPDATE person SET valid = 0 WHERE id = personId; " +
"END;"
);
The entity can use this stored procedure to soft-delete the entity in question:
Example 607. Customizing the entity delete statement to use the Oracle stored procedure instead
JAVA
@SQLDelete(
sql = "{ call sp_delete_person( ? ) } ",
callable = true
)
You need to set the callable attribute when using a stored procedure instead of an SQL statement.
18. Spatial
18.1. Overview
Hibernate Spatial was originally developed as a generic extension to Hibernate for handling geographic data. Since version 5.0, Hibernate
Spatial has been part of the Hibernate ORM project, and it allows you to deal with geographic data in a standardized way.
Hibernate Spatial provides a standardized, cross-database interface to geographic data storage and query functions. It supports
most of the functions described by the OGC Simple Feature Specification. Supported databases are Oracle 10g/11g,
PostgreSQL/PostGIS, MySQL, Microsoft SQL Server and H2/GeoDB.
Spatial data types are not part of the Java standard library, and they are absent from the JDBC specification. Over the years, JTS
(http://tsusiatsoftware.net/jts/main.html) has emerged as the de facto standard to fill this gap. JTS is an implementation of the Simple
Feature Specification (SFS) (https://portal.opengeospatial.org/files/?artifact_id=829). Many databases, on the other hand, implement the
SQL/MM - Part 3: Spatial Data specification - a related, but broader, specification. The biggest difference is that SFS is limited to 2D
geometries in the projected plane (although JTS supports 3D coordinates), whereas SQL/MM supports 2-, 3- or 4-dimensional
coordinate spaces.
Hibernate Spatial supports two different geometry models: JTS (http://tsusiatsoftware.net/jts/main.html) and geolatte-geom
(https://github.com/GeoLatte/geolatte-geom). As already mentioned, JTS is the de facto standard. Geolatte-geom (also written by the lead
developer of Hibernate Spatial) is a more recent library that supports many features specified in SQL/MM but not available in JTS
(such as support for 4D geometries, and support for extended WKT/WKB formats). Geolatte-geom also implements
encoders/decoders for the database native types. Geolatte-geom has good interoperability with JTS. Converting a Geolatte
geometry to a JTS Geometry, for instance, doesn’t require copying of the coordinates. It also delegates spatial processing to JTS.
Whether you use JTS or Geolatte-geom, Hibernate Spatial maps the database spatial types to your geometry model of choice. It
will, however, always use Geolatte-geom to decode the database native types.
Hibernate Spatial also makes a number of spatial functions available in HQL and in the Criteria Query API. These functions are
specified in both SQL/MM and SFS, and are commonly implemented in databases with spatial support (see Hibernate Spatial dialect
function support).
18.2. Configuration
Hibernate Spatial requires some configuration before you can start using it.
18.2.1. Dependency
You need to include the hibernate-spatial dependency in your build environment. For Maven, you need to add the following
dependency:
XML
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-spatial</artifactId>
    <version>${hibernate.version}</version>
</dependency>
18.2.2. Dialects
Hibernate Spatial extends the Hibernate ORM dialects so that the spatial functions of the database are made available within HQL
and JPQL. So, for instance, instead of using the PostgreSQL82Dialect, we use the Hibernate Spatial extension of that dialect,
which is the PostgisDialect.
XML
<property
name="hibernate.dialect"
value="org.hibernate.spatial.dialect.postgis.PostgisDialect"
/>
Not all databases support all the functions defined by Hibernate Spatial. The table below provides an overview of the functions
provided by each database. If the function is defined in the Simple Feature Specification
(https://portal.opengeospatial.org/files/?artifact_id=829), the description references the relevant section.
Basic functions on Geometry:

boolean dwithin(Geometry, Geometry, double)
Returns true if the geometries are within the specified distance of one another.

Geometry transform(Geometry, int)
Returns a new geometry with its coordinates transformed to the SRID referenced by the integer parameter.

Geometry extent(Geometry)
Returns a bounding box that bounds the set of returned geometries.
Postgis
For Postgis version 1.3 and later, the best dialect to use is
org.hibernate.spatial.dialect.postgis.PostgisDialect.
This translates the HQL spatial functions to the Postgis SQL/MM-compliant functions. For older, pre v1.3 versions of Postgis,
which are not SQL/MM compliant, the dialect org.hibernate.spatial.dialect.postgis.PostgisNoSQLMM is provided.
MySQL
There are several dialects for MySQL:
MySQLSpatialDialect
MySQL5SpatialDialect
MySQLSpatial56Dialect
MySQL versions before 5.6.1 had only limited support for spatial operators. Most operators only took
account of the minimum bounding rectangles (MBR) of the geometries, and not the geometries
themselves.
This changed in version 5.6.1, when MySQL introduced ST_* spatial operators. The dialect
MySQLSpatial56Dialect uses these newer, more precise operators.
These dialects may, therefore, produce results that differ from those of the other spatial dialects.
For more information, see this page in the MySQL reference guide (esp. the section Functions That Test Spatial
Relations Between Geometry Objects (https://dev.mysql.com/doc/refman/5.7/en/spatial-relation-functions.html))
Oracle10g/11g
There is currently only one Oracle spatial dialect: OracleSpatial10gDialect, which extends the Hibernate dialect
Oracle10gDialect. This dialect has been tested on both Oracle 10g and Oracle 11g with the SDO_GEOMETRY spatial database
type.
hibernate.spatial.connection_finder
the fully-qualified class name for the implementation of the ConnectionFinder to use (see below).
When the passed object is not already an OracleConnection , the default implementation will attempt to
retrieve the OracleConnection by recursive reflection. It will search for methods that return Connection objects,
execute these methods and check the result. If the result is of type OracleConnection the object is returned.
Otherwise, it recurses on it.
In many cases, this strategy will suffice. If not, you can provide your own implementation of this interface on the
classpath and configure it in the hibernate.spatial.connection_finder property. Note that implementations
must be thread-safe and have a default no-args constructor.
SQL Server
The dialect SqlServer2008Dialect supports the GEOMETRY type in SQL Server 2008 and later.
GeoDB (H2)
The GeoDBDialect supports GeoDB, a spatial extension of the H2 in-memory database.
DB2
The DB2SpatialDialect supports the spatial extensions of the DB2 LUW database. The dialect has been tested with DB2 LUW
11.1. The dialect does not support DB2 for z/OS or DB2 column-oriented databases.
In order to use the DB2 Hibernate Spatial capabilities, it is necessary to first execute the following SQL
statements, which will allow DB2 to accept Extended Well-Known Text (EWKT) data and return EWKT
data. One way to do this is to copy these statements into a file such as ewkt.sql and execute it in a
DB2 command window with a command like 'db2 -tvf ewkt.sql'.
SQL
18.3. Types
Hibernate Spatial comes with the following types:
jts_geometry
Handled by org.hibernate.spatial.JTSGeometryType, it maps a database geometry column type to a
com.vividsolutions.jts.geom.Geometry entity property type.
geolatte_geometry
Handled by org.hibernate.spatial.GeolatteGeometryType, it maps a database geometry column type to an
org.geolatte.geom.Geometry entity property type.
It suffices to declare a property as either a JTS or a Geolatte-geom Geometry and Hibernate Spatial will map it using the relevant
type.
JAVA
import com.vividsolutions.jts.geom.Point;

@Entity(name = "Event")
public static class Event {

	@Id
	private Long id;

	private String name;

	private Point location;

	//Getters and setters are omitted for brevity
}
JAVA
Event event = new Event();
event.setId( 1L );
event.setName( "Hibernate ORM presentation" );
Point point = geometryFactory.createPoint( new Coordinate( 10, 5 ) );
event.setLocation( point );

entityManager.persist( event );
Spatial dialects define many query functions that are available both in HQL and JPQL queries. Below we show how we could use
the within function to find all objects within a given spatial extent or window.
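Such a query can be sketched as follows. This is a hypothetical sketch, assuming the Event entity mapped earlier and a pre-built JTS Geometry named window describing the search area; the execution lines are shown as comments because they require an open EntityManager:

```java
// The spatial `within` function becomes available in JPQL once a
// spatial dialect is configured.
String jpql =
	"select e " +
	"from Event e " +
	"where within( e.location, :window ) = true";

// Execution sketch (needs a running EntityManager and a Geometry `window`):
// List<Event> events = entityManager
//	.createQuery( jpql, Event.class )
//	.setParameter( "window", window )
//	.getResultList();
```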
19. Multitenancy
Each approach has pros and cons as well as specific techniques and considerations. Such topics are
beyond the scope of this documentation. Many resources exist which delve into these other topics, like
this one (http://msdn.microsoft.com/en-us/library/aa479086.aspx) which does a great job of covering
these topics.
Each tenant’s data is kept in a physically separate database instance. JDBC Connections would point specifically to each database
so any pooling would be per-tenant. A general application approach, here, would be to define a JDBC Connection pool per-tenant
and to select the pool to use based on the tenant identifier associated with the currently logged in user.
Each tenant’s data is kept in a distinct database schema on a single database instance. There are two different ways to define JDBC
Connections here:
Connections could point specifically to each schema as we saw with the Separate database approach. This is an option
provided that the driver supports naming the default schema in the connection URL or if the pooling mechanism supports
naming a schema to use for its Connections. Using this approach, we would have a distinct JDBC Connection pool per-tenant
where the pool to use would be selected based on the "tenant identifier" associated with the currently logged in user.
Connections could point to the database itself (using some default schema) but the Connections would be altered using the SQL
SET SCHEMA (or similar) command. Using this approach, we would have a single JDBC Connection pool for use to service all
tenants, but before using the Connection, it would be altered to reference the schema named by the "tenant identifier"
associated with the currently logged in user.
All data is kept in a single database schema. The data for each tenant is partitioned by the use of partition value or discriminator.
The complexity of this discriminator might range from a simple column value to a complex SQL formula. Again, this approach
would use a single Connection pool to service all tenants. However, in this approach, the application needs to alter each and every
SQL statement sent to the database to reference the "tenant identifier" discriminator.
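As a hypothetical illustration of that rewriting (the table and column names below are made up for the example), a statement issued on behalf of tenant 'acme' would gain an extra discriminator predicate:

```sql
-- Statement the application logically issues:
--   SELECT id, name FROM person WHERE id = 1
-- Statement actually sent to the database for tenant 'acme':
SELECT id, name
FROM person
WHERE tenant_id = 'acme'
  AND id = 1;
```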
JAVA
private void doInSession(String tenant, Consumer<Session> function) {
	Session session = null;
	Transaction txn = null;
	try {
		session = sessionFactory
			.withOptions()
			.tenantIdentifier( tenant )
			.openSession();
		txn = session.getTransaction();
		txn.begin();
		function.accept( session );
		txn.commit();
	}
	catch (Throwable e) {
		if ( txn != null ) txn.rollback();
		throw e;
	}
	finally {
		if ( session != null ) {
			session.close();
		}
	}
}
Additionally, when specifying the configuration, an org.hibernate.MultiTenancyStrategy should be named using the
hibernate.multiTenancy setting. Hibernate will perform validations based on the type of strategy you specify. The strategy
here correlates with the isolation approach discussed above.
NONE
(the default) No multitenancy is expected. In fact, it is considered an error if a tenant identifier is specified when opening a
session using this strategy.
SCHEMA
Correlates to the separate schema approach. It is an error to attempt to open a session without a tenant identifier using this
strategy. Additionally, a MultiTenantConnectionProvider must be specified.
DATABASE
Correlates to the separate database approach. It is an error to attempt to open a session without a tenant identifier using this
strategy. Additionally, a MultiTenantConnectionProvider must be specified.
DISCRIMINATOR
Correlates to the partitioned (discriminator) approach. It is an error to attempt to open a session without a tenant identifier
using this strategy. This strategy is not yet implemented and you can follow its progress via the HHH-6054 Jira issue
(https://hibernate.atlassian.net/browse/HHH-6054).
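For example, the SCHEMA strategy could be configured as follows. The provider and resolver class names below are placeholders for your own implementations, not classes shipped with Hibernate:

```xml
<property
    name="hibernate.multiTenancy"
    value="SCHEMA"
/>
<property
    name="hibernate.multi_tenant_connection_provider"
    value="com.example.SchemaMultiTenantConnectionProvider"
/>
<property
    name="hibernate.tenant_identifier_resolver"
    value="com.example.TenantIdentifierResolver"
/>
```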
19.4.1. MultiTenantConnectionProvider
When using either the DATABASE or SCHEMA approach, Hibernate needs to be able to obtain Connections in a tenant-specific
manner.
That is the role of the MultiTenantConnectionProvider contract. Application developers will need to provide an
implementation of this contract.
Most of its methods are extremely self-explanatory. The only ones which might not be are getAnyConnection and
releaseAnyConnection . It is important to note also that these methods do not accept the tenant identifier. Hibernate uses these
methods during startup to perform various configuration, mainly via the java.sql.DatabaseMetaData object.
If none of the above options match, but the settings do specify a hibernate.connection.datasource value, Hibernate will
assume it should use the specific DataSourceBasedMultiTenantConnectionProviderImpl implementation which works on
a number of pretty reasonable assumptions when running inside of an app server and using one javax.sql.DataSource per
tenant. See its Javadocs
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/engine/jdbc/connections
/spi/DataSourceBasedMultiTenantConnectionProviderImpl.html)
for more details.
JAVA
public class ConfigurableMultiTenantConnectionProvider
		extends AbstractMultiTenantConnectionProvider {

	private final Map<String, ConnectionProvider> connectionProviderMap =
			new HashMap<>();

	public ConfigurableMultiTenantConnectionProvider(
			Map<String, ConnectionProvider> connectionProviderMap) {
		this.connectionProviderMap.putAll( connectionProviderMap );
	}

	@Override
	protected ConnectionProvider getAnyConnectionProvider() {
		return connectionProviderMap.values().iterator().next();
	}

	@Override
	protected ConnectionProvider selectConnectionProvider(String tenantIdentifier) {
		return connectionProviderMap.get( tenantIdentifier );
	}
}
JAVA
private void init() {
	registerConnectionProvider( FRONT_END_TENANT );
	registerConnectionProvider( BACK_END_TENANT );
	sessionFactory = sessionFactory( settings );
}

protected void registerConnectionProvider(String tenantIdentifier) {
	//the tenant-specific connection settings are assembled into `properties` (elided)
	DriverManagerConnectionProviderImpl connectionProvider =
		new DriverManagerConnectionProviderImpl();
	connectionProvider.configure( properties );
	connectionProviderMap.put( tenantIdentifier, connectionProvider );
}
When using multitenancy, it’s possible to save an entity with the same identifier across different tenants.
19.4.2. CurrentTenantIdentifierResolver
org.hibernate.context.spi.CurrentTenantIdentifierResolver is a contract that allows Hibernate to resolve what the
application considers the current tenant identifier. The implementation to use can be passed directly to Configuration via its
setCurrentTenantIdentifierResolver method, or it can be specified via the hibernate.tenant_identifier_resolver
setting.
There are two situations where CurrentTenantIdentifierResolver is used. The first is when the application is using the org.hibernate.context.spi.CurrentSessionContext feature in
conjunction with multitenancy. In the case of the current-session feature, Hibernate will need to open a session if it cannot
find an existing one in scope. However, when a session is opened in a multitenant environment, the tenant identifier has to be
specified. This is where the CurrentTenantIdentifierResolver comes into play; Hibernate will consult the implementation
you provide to determine the tenant identifier to use when opening the session. In this case, it is required that a
CurrentTenantIdentifierResolver is supplied.
The other situation is when you do not want to have to explicitly specify the tenant identifier all the time. If a
CurrentTenantIdentifierResolver has been specified, Hibernate will use it to determine the default tenant identifier to
use when opening the session.
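A common pattern is to keep the current tenant identifier in a ThreadLocal that web or service code sets per request, and have the resolver read it. This is an illustrative sketch, not Hibernate API; the class and method names are made up, and the resolver itself is shown as a comment because it needs the Hibernate API on the classpath:

```java
// Plain holder for the current tenant identifier; request-handling code
// calls setTenant() before any Session is opened on the thread.
class TenantContext {

	private static final ThreadLocal<String> TENANT =
			ThreadLocal.withInitial( () -> "unknown" );

	static void setTenant(String tenant) {
		TENANT.set( tenant );
	}

	static String getTenant() {
		return TENANT.get();
	}
}

// The resolver implementation would simply delegate to the holder:
//
// public class ThreadLocalTenantResolver implements CurrentTenantIdentifierResolver {
//	@Override
//	public String resolveCurrentTenantIdentifier() {
//		return TenantContext.getTenant();
//	}
//	@Override
//	public boolean validateExistingCurrentSessions() {
//		return true;
//	}
// }
```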
19.4.3. Caching
Multitenancy support in Hibernate works seamlessly with the Hibernate second level cache. The key used to cache data encodes
the tenant identifier.
Currently, schema export will not really work with multitenancy. That may not change.
The JPA expert group is in the process of defining multitenancy support for an upcoming version of the
specification.
20. OSGi
20.2. hibernate-osgi
Rather than embedding OSGi capabilities into hibernate-core and its sub-modules, hibernate-osgi was created. It’s purposefully
separated, isolating all OSGi dependencies. It provides an OSGi-specific ClassLoader (which aggregates the container’s ClassLoader
with the core and EntityManager ClassLoaders), a JPA persistence provider, SessionFactory / EntityManagerFactory
bootstrapping, an entities/mappings scanner, and service management.
20.3. features.xml
Apache Karaf environments tend to make heavy use of its "features" concept, where a feature is a set of order-specific bundles
focused on a concise capability. These features are typically defined in a features.xml file. Hibernate produces and releases its
own features.xml that defines a core hibernate-orm , as well as additional features for optional functionality (caching,
Envers, etc.). This is included in the binary distribution, as well as deployed to the JBoss Nexus repository (using the
org.hibernate groupId and hibernate-osgi with the karaf.xml classifier).
Note that our features are versioned using the same ORM artifact versions they wrap. Also, note that the features are heavily
tested against Karaf 3.0.3 as a part of our PaxExam-based integration tests. However, they’ll likely work on other versions as well.
hibernate-osgi, theoretically, supports a variety of OSGi containers, such as Equinox. In that case, please use features.xml as a
reference for necessary bundles to activate and their correct ordering. However, note that Karaf starts a number of bundles
automatically, several of which would need to be installed manually on alternatives.
20.4. QuickStarts/Demos
All three configurations have a QuickStart/Demo available in the hibernate-demos (https://github.com/hibernate/hibernate-demos)
project:
20.7. persistence.xml
Similar to any other JPA setup, your bundle must include a persistence.xml file. This is typically located in META-INF .
20.8. DataSource
Typical Enterprise OSGi JPA usage includes a DataSource installed in the container. Your bundle’s persistence.xml calls out
the DataSource through JNDI. For example, you could install the following H2 DataSource . You can deploy the DataSource
manually (Karaf has a deploy dir), or through a "blueprint bundle" ( blueprint:file:/[PATH]/datasource-h2.xml ).
XML
<?xml version="1.0" encoding="UTF-8"?>
<!--
First install the H2 driver using:
> install -s mvn:com.h2database/h2/1.3.163
That DataSource is then used by your persistence.xml persistence-unit. The following works in Karaf, but the names may
need to be tweaked in alternative containers.
XML
<jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/h2ds)</jta-data-source>
javax.persistence
org.hibernate.proxy and javassist.util.proxy , due to Hibernate’s ability to return proxies for lazy initialization
(Javassist enhancement occurs on the entity’s ClassLoader during runtime).
XML
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.0.0"
xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.0.0"
default-activation="eager"
xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<!-- This gets the container-managed EntityManager and injects it into the DataPointServiceImpl bean.
Assumes DataPointServiceImpl has an "entityManager" field with a getter and setter. -->
<bean id="dpService" class="org.hibernate.osgitest.DataPointServiceImpl">
<jpa:context unitname="managed-jpa" property="entityManager"/>
<tx:transaction method="*" value="Required"/>
</bean>
</blueprint>
20.12. persistence.xml
Similar to any other JPA setup, your bundle must include a persistence.xml file. This is typically located in META-INF .
javax.persistence
org.hibernate.proxy and javassist.util.proxy , due to Hibernate’s ability to return proxies for lazy initialization
(Javassist enhancement occurs on the entity’s ClassLoader during runtime)
It is VITAL that your EntityManagerFactory be obtained through the service, rather than creating it
manually. The service handles the OSGi ClassLoader , discovered extension points, scanning, etc.
Manually creating an EntityManagerFactory is guaranteed to NOT work during runtime!
JAVA
public class HibernateUtil {

	private static EntityManagerFactory emf;

	public static EntityManagerFactory getEntityManagerFactory(BundleContext context) {
		if ( emf == null ) {
			ServiceReference serviceReference = context.getServiceReference(
				PersistenceProvider.class.getName()
			);
			PersistenceProvider persistenceProvider =
				(PersistenceProvider) context.getService( serviceReference );
			emf = persistenceProvider.createEntityManagerFactory(
				"YourPersistenceUnitName",
				null
			);
		}
		return emf;
	}
}
javax.persistence
org.hibernate.proxy and javassist.util.proxy , due to Hibernate’s ability to return proxies for lazy initialization
(Javassist enhancement occurs on the entity’s ClassLoader during runtime)
It is VITAL that your SessionFactory be obtained through the service, rather than creating it manually.
The service handles the OSGi ClassLoader , discovered extension points, scanning, etc. Manually
creating a SessionFactory is guaranteed to NOT work during runtime!
JAVA
public class HibernateUtil {

	private static SessionFactory sf;

	public static SessionFactory getSessionFactory(BundleContext context) {
		if ( sf == null ) {
			ServiceReference sr = context.getServiceReference(
				SessionFactory.class.getName()
			);
			sf = (SessionFactory) context.getService( sr );
		}
		return sf;
	}
}
org.hibernate.integrator.spi.Integrator
(as of 4.2)
org.hibernate.boot.registry.selector.StrategyRegistrationProvider
(as of 4.3)
org.hibernate.boot.model.TypeContributor
(as of 4.3)
JTA’s
javax.transaction.TransactionManager and javax.transaction.UserTransaction (as of 4.2). However, these are
typically provided by the OSGi container.
The easiest way to register extension point implementations is through a blueprint.xml file. Add OSGI-INF/blueprint
/blueprint.xml to your classpath. Envers' blueprint is a great example:
XML
<!--
~ Hibernate, Relational Persistence for Idiomatic Java
~
~ License: GNU Lesser General Public License (LGPL), version 2.1 or later.
~ See the lgpl.txt file in the root directory or <http://www.gnu.org/licenses/lgpl-2.1.html>.
-->
<blueprint default-activation="eager"
xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<bean id="typeContributor"
class="org.hibernate.envers.boot.internal.TypeContributorImpl"/>
<service ref="typeContributor" interface="org.hibernate.boot.model.TypeContributor"/>
</blueprint>
Extension points can also be registered programmatically with BundleContext#registerService , typically within your
BundleActivator#start .
20.20. Caveats
Technically, multiple persistence units are supported by Enterprise OSGi JPA and unmanaged Hibernate JPA use. However, we
cannot currently support this in OSGi. In Hibernate 4, only one instance of the OSGi-specific ClassLoader is used per
Hibernate bundle, mainly due to heavy use of static TCCL utilities. We hope to support one OSGi ClassLoader per persistence
unit in Hibernate 5.
Scanning is supported to find non-explicitly listed entities and mappings. However, they MUST be in the same bundle as your
persistence unit (fairly typical anyway). Our OSGi ClassLoader only considers the "requesting bundle" (hence the
requirement on using services to create EntityManagerFactory / SessionFactory ), rather than attempting to scan all
available bundles. This is primarily for versioning considerations, collision protection, etc.
Some containers (ex: Aries) always return true for PersistenceUnitInfo#excludeUnlistedClasses , even if your
persistence.xml explicitly has exclude-unlisted-classes set to false . They claim it’s to protect JPA providers from
having to implement scanning ("we handle it for you"), even though we still want to support it in many cases. The workaround
is to set hibernate.archive.autodetection to, for example, hbm,class . This tells hibernate to ignore the
excludeUnlistedClasses value and scan for *.hbm.xml and entities regardless.
Currently, Hibernate OSGi is primarily tested using Apache Karaf and Apache Aries JPA. Additional testing is needed with
Equinox, Gemini, and other container providers.
Hibernate ORM has many dependencies that do not currently provide OSGi manifests. The QuickStart tutorials make heavy
use of 3rd party bundles (SpringSource, ServiceMix) or the wrap:… operator.
21. Envers
21.1. Basics
To audit changes that are performed on an entity, you only need two things:
Unlike in previous versions, you no longer need to specify listeners in the Hibernate configuration file.
Just putting the Envers jar on the classpath is enough because listeners will be registered
automatically.
And that’s all. You can create, modify and delete the entities as always.
The use of JPA’s CriteriaUpdate and CriteriaDelete bulk operations is not currently supported by
Envers due to how an entity’s lifecycle events are dispatched. Such operations should be avoided, as
they’re not captured by Envers and lead to an incomplete audit history.
If you look at the generated schema for your entities, or at the data persisted by Hibernate, you will notice that there are no
changes. However, for each audited entity, a new table is introduced - entity_table_AUD , which stores the historical data,
whenever you commit a transaction.
Envers automatically creates audit tables if the hibernate.hbm2ddl.auto option is set to create,
create-drop, or update. Appropriate DDL statements can also be generated with an Ant task, as
described in Generating Envers schema with Hibernate hbm2ddl tool.
Considering we have a Customer entity, when annotating it with the @Audited annotation, Hibernate is going to generate the
following tables using the hibernate.hbm2ddl.auto schema tool:
JAVA
@Audited
@Entity(name = "Customer")
public static class Customer {

	@Id
	private Long id;

	private String firstName;

	private String lastName;

	@Column(name = "created_on")
	private Date createdOn;
}
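The generated schema can be sketched as follows. The column types here are illustrative (they vary by dialect), but the column names match the statements shown below:

```sql
-- Audit table for the Customer entity: the original columns plus the
-- revision number (REV) and the revision type (REVTYPE).
create table Customer_AUD (
    id bigint not null,
    REV integer not null,
    REVTYPE tinyint,
    created_on timestamp,
    firstName varchar(255),
    lastName varchar(255),
    primary key (id, REV)
);

-- Revision bookkeeping table: one row per transaction that modified
-- at least one audited entity.
create table REVINFO (
    REV integer generated by default as identity,
    REVTSTMP bigint,
    primary key (REV)
);
```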
Instead of annotating the whole class and auditing all properties, you can annotate only some persistent properties with
@Audited . This will cause only these properties to be audited.
Now, considering the previous Customer entity, let’s see how Envers auditing works when inserting, updating, and deleting the
entity in question.
JAVA
Customer customer = new Customer();
customer.setId( 1L );
customer.setFirstName( "John" );
customer.setLastName( "Doe" );

entityManager.persist( customer );
insert
into
Customer
(created_on, firstName, lastName, id)
values
(?, ?, ?, ?)
insert
into
REVINFO
(REV, REVTSTMP)
values
(?, ?)
insert
into
Customer_AUD
(REVTYPE, created_on, firstName, lastName, id, REV)
values
(?, ?, ?, ?, ?, ?)
JAVA
Customer customer = entityManager.find( Customer.class, 1L );
customer.setLastName( "Doe Jr." );
update
Customer
set
created_on=?,
firstName=?,
lastName=?
where
id=?
insert
into
REVINFO
(REV, REVTSTMP)
values
(?, ?)
insert
into
Customer_AUD
(REVTYPE, created_on, firstName, lastName, id, REV)
values
(?, ?, ?, ?, ?, ?)
JAVA
Customer customer = entityManager.getReference( Customer.class, 1L );
entityManager.remove( customer );
delete
from
Customer
where
id = ?
insert
into
REVINFO
(REV, REVTSTMP)
values
(?, ?)
insert
into
Customer_AUD
(REVTYPE, created_on, firstName, lastName, id, REV)
values
(?, ?, ?, ?, ?, ?)
The audit (history) of an entity can be accessed using the AuditReader interface, which can be obtained by having an open
EntityManager or Session via the AuditReaderFactory .
JAVA
List<Number> revisions = doInJPA( this::entityManagerFactory, entityManager -> {
	return AuditReaderFactory.get( entityManager ).getRevisions(
		Customer.class,
		1L
	);
} );
SQL
select
c.REV as col_0_0_
from
Customer_AUD c
cross join
REVINFO r
where
c.id = ?
and c.REV = r.REV
order by
c.REV asc
Using the previously fetched revisions, we can now inspect the state of the Customer entity at that particular revision:
Example 628. Getting the first revision for the Customer entity
JAVA
Customer customer = (Customer) AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forEntitiesAtRevision( Customer.class, revisions.get( 0 ) )
	.getSingleResult();
assertEquals("Doe", customer.getLastName());
SQL
select
c.id as id1_1_,
c.REV as REV2_1_,
c.REVTYPE as REVTYPE3_1_,
c.created_on as created_4_1_,
c.firstName as firstNam5_1_,
c.lastName as lastName6_1_
from
Customer_AUD c
where
c.REV = (
select
max( c_max.REV )
from
Customer_AUD c_max
where
c_max.REV <= ?
and c.id = c_max.id
)
and c.REVTYPE <> ?
When executing the aforementioned SQL query, there are two parameters:
revision_number
The first parameter marks the revision number we are interested in or the latest one that exists up to this particular revision.
revision_type
The second parameter specifies that we are not interested in DEL RevisionType so that deleted entries are filtered out.
The same goes for the second revision associated with the UPDATE statement.
Example 629. Getting the second revision for the Customer entity
JAVA
Customer customer = (Customer) AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forEntitiesAtRevision( Customer.class, revisions.get( 1 ) )
	.getSingleResult();
For the deleted entity revision, Envers throws a NoResultException since the entity was no longer valid at that revision.
Example 630. Getting the third revision for the Customer entity
JAVA
try {
	Customer customer = (Customer) AuditReaderFactory
		.get( entityManager )
		.createQuery()
		.forEntitiesAtRevision( Customer.class, revisions.get( 2 ) )
		.getSingleResult();
}
catch (NoResultException expected) {
	// the Customer entity was deleted at this revision
}
You can use the forEntitiesAtRevision(Class<T> cls, String entityName, Number revision, boolean
includeDeletions)
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/envers/query/AuditQueryCreator.html#forEntitiesAtRevision-java.lang.Class-
java.lang.String-java.lang.Number-boolean-)
method to get the deleted entity revision so that, instead of a NoResultException , all attributes, except for the entity identifier,
are going to be null .
Example 631. Getting the third revision for the Customer entity without getting a NoResultException
JAVA
Customer customer = (Customer) AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forEntitiesAtRevision(
		Customer.class,
		Customer.class.getName(),
		revisions.get( 2 ),
		true )
	.getSingleResult();
org.hibernate.envers.audit_table_prefix
String that will be prepended to the name of an audited entity to create the name of the audit table that will hold the audit
information.
If you audit an entity with a table name Person, in the default setting Envers will generate a Person_AUD table to store
historical data.
This is not normally needed, as the data is present in the last-but-one revision. Sometimes, however, it is easier and more
efficient to access it in the last revision (the data that the entity contained before deletion is then stored twice).
If not present, the schema will be the same as the schema of the table being audited.
If not present, the catalog will be the same as the catalog of the normal tables.
The audit strategy that should be used when persisting audit data. The default strategy stores only the revision at which an entity was
modified.
An alternative, the org.hibernate.envers.strategy.ValidityAuditStrategy stores both the start revision and the end
revision. Together these define when an audit row was valid, hence the name ValidityAuditStrategy.
Partitioning requires a column that exists within the table. This property is only evaluated if the ValidityAuditStrategy is
used.
If the current database engine does not support identity columns, users are advised to set this property to false.
Should entity types that have been modified during each revision be tracked. The default implementation creates a
REVCHANGES table that stores the entity names of modified persistent objects. A single record encapsulates the revision identifier
(a foreign key to the REVINFO table) and a string value. For more information, refer to Tracking entity names modified during
revisions and Querying for entity types modified in a given revision.
Should property modification flags be stored for all audited entities and all properties.
When set to true, an additional boolean column is created in the audit tables for every audited property, storing whether
the given property changed in the given revision.
When set to false, such columns can be added to selected entities or properties using the @Audited annotation.
For more information, refer to Tracking entity changes at the property level and Querying for entity revisions that modified a
given property.
For example, a property called "age", will by default get modified flag with column name "age_MOD".
The following configuration options have been added recently and should be regarded as
experimental:
1. org.hibernate.envers.track_entities_changed_in_revision
2. org.hibernate.envers.using_modified_flag
3. org.hibernate.envers.modified_flag_suffix
4. org.hibernate.envers.original_id_prop_name
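As a minimal bootstrap sketch (assuming programmatic configuration through the properties Map passed to the persistence unit, in the same style as the AUDIT_STRATEGY example later in this chapter), the modified-flag feature could be enabled globally and its column suffix customized:

```java
// Assumption: `options` is the configuration Map used when building
// the EntityManagerFactory.
options.put( EnversSettings.GLOBAL_WITH_MODIFIED_FLAG, "true" );
// With this suffix, a property "age" gets a flag column "age_CHANGED"
// instead of the default "age_MOD".
options.put( EnversSettings.MODIFIED_FLAG_SUFFIX, "_CHANGED" );
```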
If you have a mapping with secondary tables, audit tables for them will be generated in the same way (by adding the prefix and
suffix). If you wish to overwrite this behavior, you can use the @SecondaryAuditTable and @SecondaryAuditTables
annotations.
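For instance, a hypothetical mapping (entity and table names are illustrative) that redirects the audit records of a secondary table to a custom-named audit table might look like this:

```java
// Without the override, the secondary table "customer_details" would be
// audited in a table named with the default prefix/suffix
// (i.e. "customer_details_AUD").
@Audited
@SecondaryAuditTable(
	secondaryTableName = "customer_details",
	secondaryAuditTableName = "customer_details_HISTORY"
)
@Entity(name = "Customer")
@SecondaryTable(name = "customer_details")
public static class Customer {

	@Id
	private Long id;
}
```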
If you’d like to override auditing behavior of some fields/properties inherited from @MappedSuperclass or in an embedded
component, you can apply the @AuditOverride annotation on the subtype or usage site of the component.
If you want to audit a relation mapped with @OneToMany and @JoinColumn , please see Mapping exceptions for a description of
the additional @AuditJoinTable annotation that you’ll probably want to use.
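A hedged sketch of such a mapping (the Address association and all names are illustrative):

```java
@Audited
@Entity(name = "Customer")
public static class Customer {

	@Id
	private Long id;

	// The unidirectional association is mapped with a join column, so
	// Envers records its changes in a dedicated audit join table whose
	// name is customized here.
	@Audited
	@OneToMany
	@JoinColumn(name = "customer_id")
	@AuditJoinTable(name = "CUSTOMER_ADDRESS_AUD")
	private List<Address> addresses = new ArrayList<>();
}
```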
If you want to audit a relation, where the target entity is not audited (that is the case for example with dictionary-like entities,
which don’t change and don’t have to be audited), just annotate it with @Audited( targetAuditMode =
RelationTargetAuditMode.NOT_AUDITED ) . Then, while reading historic versions of your entity, the relation will always point to
the "current" related entity. By default Envers throws javax.persistence.EntityNotFoundException when "current" entity
does not exist in the database. Apply @NotFound( action = NotFoundAction.IGNORE ) annotation to silence the exception and
assign null value instead. The hereby solution causes implicit eager loading of to-one relations.
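Put together, such a mapping might look as follows (the dictionary-like Country entity is an assumed example):

```java
@Audited
@Entity(name = "Customer")
public static class Customer {

	@Id
	private Long id;

	// The target Country entity is not audited itself; historic Customer
	// versions will therefore always point to the *current* Country row.
	@Audited(targetAuditMode = RelationTargetAuditMode.NOT_AUDITED)
	// Return null instead of throwing EntityNotFoundException when the
	// current Country row no longer exists.
	@NotFound(action = NotFoundAction.IGNORE)
	@ManyToOne
	private Country country;
}
```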
If you’d like to audit properties of a superclass of an entity, which are not explicitly audited (they don’t have the @Audited
annotation on any properties or on the class), you can set the @AuditOverride( forClass = SomeEntity.class, isAudited =
true/false ) annotation.
The @Audited annotation also features an auditParents attribute, but it is now deprecated in favor of
@AuditOverride.
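For example, assuming a hypothetical non-audited BaseEntity mapped superclass, auditing its inherited properties could be switched on like this:

```java
// Assumed mapped superclass without any @Audited annotations of its own.
@MappedSuperclass
public static class BaseEntity {

	private String createdBy;
}

// The override makes the inherited `createdBy` property audited as well.
@Audited
@AuditOverride(forClass = BaseEntity.class, isAudited = true)
@Entity(name = "Customer")
public static class Customer extends BaseEntity {

	@Id
	private Long id;
}
```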
1. The default audit strategy persists the audit data together with a start revision. For each row inserted, updated or deleted in
an audited table, one or more rows are inserted in the audit tables, together with the start revision of its validity. Rows in the
audit tables are never updated after insertion. Queries of audit information use subqueries to select the applicable rows in the
audit tables.
2. The alternative is a validity audit strategy. This strategy stores the start-revision and the end-revision of audit information. For
each row inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, together with the
start revision of its validity. At the same time, the end-revision field of the previous audit rows (if available) is set to this
revision. Queries on the audit information can then use 'between start and end revision', instead of the subqueries used by the
default audit strategy.
The consequence of this strategy is that persisting audit information will be a bit slower because of the extra updates
involved, but retrieving audit information will be a lot faster.
JAVA
options.put(
	EnversSettings.AUDIT_STRATEGY,
	ValidityAuditStrategy.class.getName()
);
If you’re using the persistence.xml configuration file, the mapping looks as follows:
XML
<property
name="org.hibernate.envers.audit_strategy"
value="org.hibernate.envers.strategy.ValidityAuditStrategy"
/>
Once you configured the ValidityAuditStrategy , the following schema is going to be automatically generated:
As you can see, the REVEND column is added as well as its Foreign key to the REVINFO table.
When rerunning the previous Customer audit log queries against the ValidityAuditStrategy , we get the following results:
Example 634. Getting the first revision for the Customer entity
select
c.id as id1_1_,
c.REV as REV2_1_,
c.REVTYPE as REVTYPE3_1_,
c.REVEND as REVEND4_1_,
c.created_on as created_5_1_,
c.firstName as firstNam6_1_,
c.lastName as lastName7_1_
from
Customer_AUD c
where
c.REV <= ?
and c.REVTYPE <> ?
and (
c.REVEND > ?
or c.REVEND is null
)
Compared to the default strategy, the ValidityAuditStrategy generates simpler queries that can
render better execution plans.
revision number
An integral value ( int/Integer or long/Long ). Essentially, the primary key of the revision.
revision timestamp
Either a long/Long or java.util.Date value representing the instant at which the revision was made. When using a
java.util.Date , instead of a long/Long for the revision timestamp, take care not to store it to a column data type which will
lose precision.
Envers handles this information as an entity. By default it uses its own internal class to act as the entity, mapped to the REVINFO
table. You can, however, supply your own approach to collecting this information which might be useful to capture additional
details such as who made a change or the IP address from which the request came. There are two things you need to make this
work:
1. First, you will need to tell Envers about the entity you wish to use. Your entity must use the
@org.hibernate.envers.RevisionEntity annotation. It must define the two attributes described above annotated with
@org.hibernate.envers.RevisionNumber and @org.hibernate.envers.RevisionTimestamp , respectively. You can
extend from org.hibernate.envers.DefaultRevisionEntity , if you wish, to inherit all these required behaviors.
Simply add the custom revision entity as you do your normal entities and Envers will find it.
2. Second, you need to tell Envers how to create instances of your revision entity which is handled by the newRevision(
Object revisionEntity )
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/envers/RevisionListener.html#newRevision-java.lang.Object-) method of the
org.hibernate.envers.RevisionListener interface.
You tell Envers your custom org.hibernate.envers.RevisionListener implementation to use by specifying it on the
@org.hibernate.envers.RevisionEntity annotation, using the value attribute. If your RevisionListener class is
inaccessible from @RevisionEntity (e.g. it exists in a different module), set org.hibernate.envers.revision_listener
property to its fully qualified class name. The class name defined by the configuration parameter overrides the revision entity’s
value attribute.
Considering we have a CurrentUser utility which stores the currently logged user:
JAVA
public static class CurrentUser {

	private static final ThreadLocal<String> storage = new ThreadLocal<>();

	// INSTANCE singleton and logIn/logOut/get accessors omitted for brevity
}
Now, we need to provide a custom @RevisionEntity to store the currently logged user
@Entity(name = "CustomRevisionEntity")
@Table(name = "CUSTOM_REV_INFO")
@RevisionEntity( CustomRevisionEntityListener.class )
public static class CustomRevisionEntity extends DefaultRevisionEntity {

	// username attribute with getter and setter omitted for brevity
}
With the custom RevisionEntity implementation in place, we only need to provide the RevisionListener implementation
which acts as a factory of RevisionEntity instances.
JAVA
public static class CustomRevisionEntityListener implements RevisionListener {

	@Override
	public void newRevision( Object revisionEntity ) {
		CustomRevisionEntity customRevisionEntity =
			(CustomRevisionEntity) revisionEntity;

		customRevisionEntity.setUsername(
			CurrentUser.INSTANCE.get()
		);
	}
}
When generating the database schema, Envers creates the following RevisionEntity table:
SQL
create table CUSTOM_REV_INFO (
id integer not null,
timestamp bigint not null,
username varchar(255),
primary key (id)
)
Now, when inserting a Customer entity, Envers generates the following statements:
entityManager.persist( customer );
} );
CurrentUser.INSTANCE.logOut();
SQL
insert
into
Customer
(created_on, firstName, lastName, id)
values
(?, ?, ?, ?)
insert
into
CUSTOM_REV_INFO
(timestamp, username, id)
values
(?, ?, ?)
insert
into
Customer_AUD
(REVTYPE, created_on, firstName, lastName, id, REV)
values
(?, ?, ?, ?, ?, ?)
As demonstrated by the example above, the username is properly set and propagated to the CUSTOM_REV_INFO table.
This strategy is deprecated since version 5.2. The alternative is to use dependency injection
offered as of version 5.3.
You can use the getCurrentRevision( Class<T> revisionEntityClass, boolean persist )
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/envers/AuditReader.html#getCurrentRevision-java.lang.Class-boolean-)
method of the org.hibernate.envers.AuditReader interface to obtain the current revision, and fill it with the desired
information.
The method accepts a persist parameter indicating whether the revision entity should be persisted prior to
returning from this method:
true
ensures that the returned entity has access to its identifier value (revision number), but the revision entity will
be persisted regardless of whether there are any audited entities changed.
false
means that the revision number will be null , but the revision entity will be persisted only if some audited
entities have changed.
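A sketch of this (deprecated) approach, reusing the CustomRevisionEntity and CurrentUser types shown earlier:

```java
// Obtain the revision entity for the current transaction; with
// `persist = true`, it is persisted immediately so its revision number
// is available. Then enrich it manually.
CustomRevisionEntity revisionEntity = AuditReaderFactory
	.get( entityManager )
	.getCurrentRevision( CustomRevisionEntity.class, true );

revisionEntity.setUsername( CurrentUser.INSTANCE.get() );
```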
It is up to the various dependency injection frameworks, such as CDI and Spring, to supply the
necessary implementation during Hibernate ORM bootstrap to support injection. If no qualifying
implementation is supplied, the RevisionListener will be constructed without injection.
@Entity(name = "CustomTrackingRevisionEntity")
@Table(name = "TRACKING_REV_INFO")
@RevisionEntity
public static class CustomTrackingRevisionEntity
	extends DefaultTrackingModifiedEntitiesRevisionEntity {
}
3. Mark an appropriate field of a custom revision entity with @org.hibernate.envers.ModifiedEntityNames annotation. The
property is required to be of Set<String> type.
JAVA
@Entity(name = "CustomTrackingRevisionEntity")
@Table(name = "TRACKING_REV_INFO")
@RevisionEntity
public static class CustomTrackingRevisionEntity extends DefaultRevisionEntity {
@ElementCollection
@JoinTable(
name = "REVCHANGES",
joinColumns = @JoinColumn( name = "REV" )
)
@Column( name = "ENTITYNAME" )
@ModifiedEntityNames
private Set<String> modifiedEntityNames = new HashSet<>();
JAVA
@Audited
@Entity(name = "Customer")
public static class Customer {
@Id
private Long id;
If the Customer entity class name is changed to ApplicationCustomer , Envers is going to insert a new record in the
REVCHANGES table with the previous entity class name:
@Audited
@Entity(name = "Customer")
public static class ApplicationCustomer {
@Id
private Long id;
SQL
insert
into
REVCHANGES
(REV, ENTITYNAME)
values
(?, ?)
Users that have chosen one of the approaches listed above can retrieve all entities modified in a specified revision by utilizing
the API described in Querying for entity types modified in a given revision.
Users are also allowed to implement custom mechanisms of tracking modified entity types. In this case, they shall pass their own
implementation of the org.hibernate.envers.EntityTrackingRevisionListener interface as the value of the
@org.hibernate.envers.RevisionEntity annotation.
The EntityTrackingRevisionListener interface exposes one method that is notified whenever an audited entity instance has been
added, modified, or removed within the current revision boundaries.
JAVA
public static class CustomTrackingRevisionListener implements EntityTrackingRevisionListener {
@Override
public void entityChanged(Class entityClass,
String entityName,
Serializable entityId,
RevisionType revisionType,
Object revisionEntity ) {
String type = entityClass.getName();
( (CustomTrackingRevisionEntity) revisionEntity ).addModifiedEntityType( type );
}
@Override
public void newRevision( Object revisionEntity ) {
}
}
The CustomTrackingRevisionListener adds the fully-qualified class name to the modifiedEntityTypes attribute of the
CustomTrackingRevisionEntity .
JAVA
@Entity(name = "CustomTrackingRevisionEntity")
@Table(name = "TRACKING_REV_INFO")
@RevisionEntity( CustomTrackingRevisionListener .class )
public static class CustomTrackingRevisionEntity {
@Id
@GeneratedValue
@RevisionNumber
private int customId;
@RevisionTimestamp
private long customTimestamp;
@OneToMany(
mappedBy="revision",
cascade={
CascadeType .PERSIST,
CascadeType .REMOVE
}
)
private Set<EntityType> modifiedEntityTypes = new HashSet<>();
Example 644. The EntityType encapsulates the entity type name before a class name modification
@Entity(name = "EntityType")
public static class EntityType {
@Id
@GeneratedValue
private Integer id;
@ManyToOne
private CustomTrackingRevisionEntity revision;
private EntityType() {
}
Now, when fetching the CustomTrackingRevisionEntity, you can get access to the previous entity class name.
JAVA
AuditReader auditReader = AuditReaderFactory.get( entityManager );
The feature described in Tracking entity names modified during revisions makes it possible to tell which entities were modified
in a given revision.
The feature described here takes it one step further. Modification Flags enable Envers to track which properties of audited entities
were modified in a given revision. You can enable this feature by:
1. setting the org.hibernate.envers.global_with_modified_flag configuration property to true. This global switch will cause
modification flags to be stored for all audited properties of all audited entities.
2. using the @Audited annotation with the withModifiedFlag attribute set to true.
The trade-off coming with this functionality is an increased size of audit tables and a very little, almost negligible, performance
drop during audit writes. This is due to the fact that every tracked property has to have an accompanying boolean column in the
schema that stores information about the property’s modifications. Of course, it is Envers’ job to fill these columns accordingly - no
additional work by the developer is required. Because of the costs mentioned, it is recommended to enable the feature selectively,
when needed, with use of the granular configuration means described above.
Example 646. Mapping for tracking entity changes at the property level
JAVA
@Audited(withModifiedFlag = true)
@Entity(name = "Customer")
public static class Customer {
@Id
private Long id;
SQL
create table Customer_AUD (
id bigint not null,
REV integer not null,
REVTYPE tinyint,
created_on timestamp,
createdOn_MOD boolean ,
firstName varchar(255),
firstName_MOD boolean ,
lastName varchar(255),
lastName_MOD boolean ,
primary key (id, REV)
)
As you can see, every property features a _MOD column (e.g. createdOn_MOD ) in the audit log.
JAVA
Customer customer = entityManager.find( Customer .class , 1L );
customer.setLastName( "Doe Jr." );
SQL
update
Customer
set
created_on = ?,
firstName = ?,
lastName = ?
where
id = ?
insert
into
REVINFO
(REV, REVTSTMP)
values
(null, ?)
insert
into
Customer_AUD
(REVTYPE, created_on, createdOn_MOD, firstName, firstName_MOD, lastName, lastName_MOD, id, REV)
values
(?, ?, ?, ?, ?, ?, ?, ?, ?)
To see how "Modified Flags" can be utilized, check out the very simple query API that uses them: Querying for entity revisions
that modified a given property.
21.8. Queries
You can think of historic data as having two dimensions:
horizontal
The state of the database at a given revision. Thus, you can query for entities as they were at revision N.
vertical
The revisions, at which entities changed. Hence, you can query for revisions, in which a given entity changed.
The queries in Envers are similar to Hibernate Criteria queries, so if you are familiar with them, using Envers queries will be
much easier.
The main limitation of the current queries implementation is that you cannot traverse relations. You can only specify constraints
on the ids of the related entities, and only on the "owning" side of the relation. This, however, will be changed in future releases.
The queries on the audited data will be in many cases much slower than corresponding queries on
"live" data, as, especially for the default audit strategy, they involve correlated subselects.
Queries are improved both in terms of speed and possibilities when using the validity audit strategy,
which stores both start and end revisions for entities. See Configuring the ValidityAuditStrategy .
JAVA
Customer customer = (Customer) AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forEntitiesAtRevision( Customer.class, revisions.get( 0 ) )
	.getSingleResult();
assertEquals("Doe", customer.getLastName());
For example, to select only entities where the firstName property is equal to "John":
Example 649. Getting the Customer audit log with a given firstName attribute value
JAVA
List<Customer> customers = AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, true, true )
	.add( AuditEntity.property( "firstName" ).eq( "John" ) )
	.getResultList();
assertEquals(2, customers.size());
assertEquals( "Doe", customers.get( 0 ).getLastName() );
assertEquals( "Doe Jr.", customers.get( 1 ).getLastName() );
And, to select only the entity revisions that reference a given related entity, you can use either the target entity reference or its identifier.
Example 650. Getting the Customer entities whose address attribute matches the given entity
reference
assertEquals(2, customers.size());
SQL
select
c.id as id1_3_,
c.REV as REV2_3_,
c.REVTYPE as REVTYPE3_3_,
c.REVEND as REVEND4_3_,
c.created_on as created_5_3_,
c.firstName as firstNam6_3_,
c.lastName as lastName7_3_,
c.address_id as address_8_3_
from
Customer_AUD c
where
c.address_id = ?
order by
c.REV asc
The same SQL is generated even if we provide the identifier instead of the target entity reference.
Example 651. Getting the Customer entities whose address identifier matches the given entity
identifier
JAVA
List<Customer> customers = AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, true, true )
	.add( AuditEntity.relatedId( "address" ).eq( 1L ) )
	.getResultList();
assertEquals(2, customers.size());
Apart from strict equality matching, you can also use an IN clause to provide multiple entity identifiers:
Example 652. Getting the Customer entities whose address identifier matches one of the given entity
identifiers
assertEquals(2, customers.size());
SQL
select
c.id as id1_3_,
c.REV as REV2_3_,
c.REVTYPE as REVTYPE3_3_,
c.REVEND as REVEND4_3_,
c.created_on as created_5_3_,
c.firstName as firstNam6_3_,
c.lastName as lastName7_3_,
c.address_id as address_8_3_
from
Customer_AUD c
where
c.address_id in (
? , ?
)
order by
c.REV asc
You can limit the number of results, order them, and set aggregations and projections (except grouping) in the usual way. When
your query is complete, you can obtain the results by calling the getSingleResult() or getResultList() methods.
Example 653. Getting the Customer entities using filtering and pagination
JAVA
List<Customer> customers = AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, true, true )
	.addOrder( AuditEntity.property( "lastName" ).desc() )
	.add( AuditEntity.relatedId( "address" ).eq( 1L ) )
	.setFirstResult( 1 )
	.setMaxResults( 2 )
	.getResultList();
assertEquals(1, customers.size());
select
c.id as id1_3_,
c.REV as REV2_3_,
c.REVTYPE as REVTYPE3_3_,
c.REVEND as REVEND4_3_,
c.created_on as created_5_3_,
c.firstName as firstNam6_3_,
c.lastName as lastName7_3_,
c.address_id as address_8_3_
from
Customer_AUD c
where
c.address_id = ?
order by
c.lastName desc
limit ?
offset ?
JAVA
AuditQuery query = AuditReaderFactory.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, false, true );
You can add constraints to this query in the same way as to the previous one.
1. using AuditEntity.revisionNumber() you can specify constraints, projections and order on the revision number, in which
the audited entity was modified
2. similarly, using AuditEntity.revisionProperty( propertyName ) you can specify constraints, projections and order on a
property of the revision entity, corresponding to the revision in which the audited entity was modified
3. AuditEntity.revisionType() gives you access as above to the type of the revision ( ADD , MOD , DEL ).
Using these methods, you can order the query results by revision number, set a projection, or constrain the revision number to be
greater or less than a specified value, etc. For example, the following query selects the smallest revision number at which the
Customer entity with id 1 changed, after revision number 2:
JAVA
Number revision = (Number) AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, false, true )
	.addProjection( AuditEntity.revisionNumber().min() )
	.add( AuditEntity.id().eq( 1L ) )
	.add( AuditEntity.revisionNumber().gt( 2 ) )
	.getSingleResult();
The second additional feature you can use in queries for revisions is the ability to maximize/minimize a property.
For example, if you want to select the smallest possible revision at which the value of the createdOn attribute was larger than a
given value, you can run the following query:
The minimize() and maximize() methods return a criterion to which you can add constraints, which must be met by the
entities with the maximized/minimized properties.
You probably also noticed that there are two boolean parameters, passed when creating the query.
selectEntitiesOnly
The first parameter is only valid when you don’t set an explicit projection.
If true, the result of the query will be a list of entities (which changed at revisions satisfying the specified constraints).
If false, the result will be a list of three-element arrays:
the first element will be the changed entity instance,
the second will be an entity containing revision data (if no custom entity is used, this will be an instance of
DefaultRevisionEntity ),
the third will be the type of the revision (one of the values of the RevisionType enumeration: ADD , MOD , DEL ).
selectDeletedEntities
The second parameter specifies if revisions, in which the entity was deleted should be included in the results.
If yes, such entities will have the revision type DEL and all attributes, except the id , will be set to null .
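To illustrate, when selectEntitiesOnly is false, each result row is an Object[] triple. The following sketch reuses the Customer entity from the previous examples and assumes the default revision entity:

```java
List<Object[]> results = AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, false, true )
	.getResultList();

for ( Object[] row : results ) {
	// Entity state at that revision.
	Customer entity = (Customer) row[0];
	// Revision metadata (DefaultRevisionEntity when no custom entity is used).
	DefaultRevisionEntity revision = (DefaultRevisionEntity) row[1];
	// ADD, MOD or DEL.
	RevisionType type = (RevisionType) row[2];
}
```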
For example, if you wanted to locate all customers but only wanted to retrieve the instances with the maximum revision number,
you would use the following query:
In other words, the result set would contain a list of Customer instances, one per primary key. Each instance would hold the
audited property data at the maximum revision number for each Customer primary key.
Let’s have a look at various queries that can benefit from these two criteria.
First, you must make sure that your entity can track modification flags:
Example 654. Valid only when audit logging tracks entity attribute modification flags
JAVA
@Audited( withModifiedFlag = true )
The following query will return all revisions of the Customer entity with the given id , for which the lastName property has
changed.
Example 655. Getting all Customer revisions for which the lastName attribute has changed
JAVA
List<Customer> customers = AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, false, true )
	.add( AuditEntity.id().eq( 1L ) )
	.add( AuditEntity.property( "lastName" ).hasChanged() )
	.getResultList();
select
c.id as id1_3_0_,
c.REV as REV2_3_0_,
defaultrev1_.REV as REV1_4_1_,
c.REVTYPE as REVTYPE3_3_0_,
c.REVEND as REVEND4_3_0_,
c.created_on as created_5_3_0_,
c.createdOn_MOD as createdO6_3_0_,
c.firstName as firstNam7_3_0_,
c.firstName_MOD as firstNam8_3_0_,
c.lastName as lastName9_3_0_,
c.lastName_MOD as lastNam10_3_0_,
c.address_id as address11_3_0_,
c.address_MOD as address12_3_0_,
defaultrev1_.REVTSTMP as REVTSTMP2_4_1_
from
Customer_AUD c
cross join
REVINFO defaultrev1_
where
c.id = ?
and c.lastName_MOD = ?
and c.REV=defaultrev1_.REV
order by
c.REV asc
Using this query, we won’t get the revisions in which lastName wasn’t touched. From the SQL query, you can see that the
lastName_MOD column is used in the WHERE clause, hence the aforementioned requirement for tracking modification
flags.
Of course, nothing prevents users from combining the hasChanged condition with additional criteria.
Example 656. Getting all Customer revisions for which the lastName attribute has changed and the
firstName attribute has not changed
JAVA
List<Customer> customers = AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forRevisionsOfEntity( Customer.class, false, true )
	.add( AuditEntity.id().eq( 1L ) )
	.add( AuditEntity.property( "lastName" ).hasChanged() )
	.add( AuditEntity.property( "firstName" ).hasNotChanged() )
	.getResultList();
select
c.id as id1_3_0_,
c.REV as REV2_3_0_,
defaultrev1_.REV as REV1_4_1_,
c.REVTYPE as REVTYPE3_3_0_,
c.REVEND as REVEND4_3_0_,
c.created_on as created_5_3_0_,
c.createdOn_MOD as createdO6_3_0_,
c.firstName as firstNam7_3_0_,
c.firstName_MOD as firstNam8_3_0_,
c.lastName as lastName9_3_0_,
c.lastName_MOD as lastNam10_3_0_,
c.address_id as address11_3_0_,
c.address_MOD as address12_3_0_,
defaultrev1_.REVTSTMP as REVTSTMP2_4_1_
from
Customer_AUD c
cross join
REVINFO defaultrev1_
where
c.id=?
and c.lastName_MOD=?
and c.firstName_MOD=?
and c.REV=defaultrev1_.REV
order by
c.REV asc
To get the Customer entities changed at a given revisionNumber with lastName modified and firstName untouched, we
have to use the forEntitiesModifiedAtRevision query:
Example 657. Getting the Customer entity for a given revision if the lastName attribute has changed
and the firstName attribute has not changed
JAVA
Customer customer = (Customer) AuditReaderFactory
	.get( entityManager )
	.createQuery()
	.forEntitiesModifiedAtRevision( Customer.class, 2 )
	.add( AuditEntity.id().eq( 1L ) )
	.add( AuditEntity.property( "lastName" ).hasChanged() )
	.add( AuditEntity.property( "firstName" ).hasNotChanged() )
	.getSingleResult();
select
c.id as id1_3_,
c.REV as REV2_3_,
c.REVTYPE as REVTYPE3_3_,
c.REVEND as REVEND4_3_,
c.created_on as created_5_3_,
c.createdOn_MOD as createdO6_3_,
c.firstName as firstNam7_3_,
c.firstName_MOD as firstNam8_3_,
c.lastName as lastName9_3_,
c.lastName_MOD as lastNam10_3_,
c.address_id as address11_3_,
c.address_MOD as address12_3_
from
Customer_AUD c
where
c.REV=?
and c.id=?
and c.lastName_MOD=?
and c.firstName_MOD=?
21.13. Querying for revisions of entity including property names that were modified
The feature described here is still considered experimental. It is subject to change in future releases
based on user feedback to improve its usefulness.
Sometimes it may be useful to query entity revisions and also determine all the properties of that revision which were modified
without having to issue multiple queries using hasChanged() and hasNotChanged() criteria.
You can now obtain this information easily by using the following query:
JAVA
List results = AuditReaderFactory.get( entityManager )
    .createQuery()
    .forRevisionsOfEntityWithChanges( Customer.class, false )
    .add( AuditEntity.id().eq( 1L ) )
    .getResultList();
The methods described below can be used only when the default mechanism of tracking changed
entity types is enabled (see Tracking entity names modified during revisions).
This basic query allows retrieving entity names and corresponding Java classes changed in a specified revision:
Example 659. Retrieving entity names and corresponding Java classes changed in a specified revision
JAVA
assertEquals(
    "org.hibernate.userguide.envers.EntityTypeChangeAuditTest$Customer",
    AuditReaderFactory
        .get( entityManager )
        .getCrossTypeRevisionChangesReader()
        .findEntityTypes( 1 )
        .iterator().next()
        .getFirst()
);
assertEquals(
    "org.hibernate.userguide.envers.EntityTypeChangeAuditTest$ApplicationCustomer",
    AuditReaderFactory
        .get( entityManager )
        .getCrossTypeRevisionChangesReader()
        .findEntityTypes( 2 )
        .iterator().next()
        .getFirst()
);
findEntities
Returns snapshots of all audited entities changed (added, updated, and removed) in a given revision. Executes N+1 SQL
queries, where N is the number of different entity classes modified within the specified revision.
findEntities (filtered by modification type)
Returns snapshots of all audited entities changed (added, updated, or removed) in a given revision, filtered by modification type.
Executes N+1 SQL queries, where N is the number of different entity classes modified within the specified revision.
findEntitiesGroupByRevisionType
Returns a map containing lists of entity snapshots grouped by modification operation (e.g. addition, update, and removal).
Executes 3N+1 SQL queries, where N is the number of different entity classes modified within the specified revision.
Relation join queries are considered experimental and may change in future releases.
Audit queries support the ability to apply constraints, projections, and sort operations based on entity relations. In order to
traverse entity relations through an audit query, you must use the relation traversal API with a join type.
Relation joins can be applied to many-to-one and one-to-one mappings only when using
JoinType.LEFT or JoinType.INNER .
JAVA
AuditQuery innerJoinAuditQuery = AuditReaderFactory
    .get( entityManager )
    .createQuery()
    .forEntitiesAtRevision( Customer.class, 1 )
    .traverseRelation( "address", JoinType.INNER );
JAVA
AuditQuery leftJoinAuditQuery = AuditReaderFactory
    .get( entityManager )
    .createQuery()
    .forEntitiesAtRevision( Customer.class, 1 )
    .traverseRelation( "address", JoinType.LEFT );
Like any other query, constraints may be added to restrict the results.
For example, to find the Customer entities at a given revision whose addresses are in România , you can use the following query:
Example 662. Filtering the join relation with a WHERE clause predicate
JAVA
List<Customer> customers = AuditReaderFactory
    .get( entityManager )
    .createQuery()
    .forEntitiesAtRevision( Customer.class, 1 )
    .traverseRelation( "address", JoinType.INNER )
    .add( AuditEntity.property( "country" ).eq( "România" ) )
    .getResultList();
select
c.id as id1_3_,
c.REV as REV2_3_,
c.REVTYPE as REVTYPE3_3_,
c.REVEND as REVEND4_3_,
c.created_on as created_5_3_,
c.firstName as firstNam6_3_,
c.lastName as lastName7_3_,
c.address_id as address_8_3_
from
Customer_AUD c
inner join
Address_AUD a
on (
c.address_id=a.id
or (
c.address_id is null
)
and (
a.id is null
)
)
where
c.REV<=?
and c.REVTYPE<>?
and (
c.REVEND>?
or c.REVEND is null
)
and a.REV<=?
and a.country=?
and (
a.REVEND>?
or a.REVEND is null
)
For example, to find all Customer entities at a given revision with the country attribute of the address property being România :
Example 663. Filtering a nested join relation with a WHERE clause predicate
JAVA
List<Customer> customers = AuditReaderFactory
    .get( entityManager )
    .createQuery()
    .forEntitiesAtRevision( Customer.class, 1 )
    .traverseRelation( "address", JoinType.INNER )
    .traverseRelation( "country", JoinType.INNER )
    .add( AuditEntity.property( "name" ).eq( "România" ) )
    .getResultList();

assertEquals( 1, customers.size() );
select
cu.id as id1_5_,
cu.REV as REV2_5_,
cu.REVTYPE as REVTYPE3_5_,
cu.REVEND as REVEND4_5_,
cu.created_on as created_5_5_,
cu.firstName as firstNam6_5_,
cu.lastName as lastName7_5_,
cu.address_id as address_8_5_
from
Customer_AUD cu
inner join
Address_AUD a
on (
cu.address_id=a.id
or (
cu.address_id is null
)
and (
a.id is null
)
)
inner join
Country_AUD co
on (
a.country_id=co.id
or (
a.country_id is null
)
and (
co.id is null
)
)
where
cu.REV<=?
and cu.REVTYPE<>?
and (
cu.REVEND>?
or cu.REVEND is null
)
and a.REV<=?
and (
a.REVEND>?
or a.REVEND is null
)
and co.REV<=?
and co.name=?
and (
co.REVEND>?
or co.REVEND is null
)
Constraints may also be added to the properties of nested joined relations, such as testing for null .
For example, the following query illustrates how to find all Customer entities at a given revision whose address is in
Cluj-Napoca or whose address does not have any country entity reference:
JAVA
List<Customer> customers = AuditReaderFactory
    .get( entityManager )
    .createQuery()
    .forEntitiesAtRevision( Customer.class, 1 )
    .traverseRelation( "address", JoinType.LEFT, "a" )
    .add(
        AuditEntity.or(
            AuditEntity.property( "a", "city" ).eq( "Cluj-Napoca" ),
            AuditEntity.relatedId( "country" ).eq( null )
        )
    )
    .getResultList();
select
c.id as id1_5_,
c.REV as REV2_5_,
c.REVTYPE as REVTYPE3_5_,
c.REVEND as REVEND4_5_,
c.created_on as created_5_5_,
c.firstName as firstNam6_5_,
c.lastName as lastName7_5_,
c.address_id as address_8_5_
from
Customer_AUD c
left outer join
Address_AUD a
on (
c.address_id=a.id
or (
c.address_id is null
)
and (
a.id is null
)
)
where
c.REV<=?
and c.REVTYPE<>?
and (
c.REVEND>?
or c.REVEND is null
)
and (
a.REV is null
or a.REV<=?
and (
a.REVEND>?
or a.REVEND is null
)
)
and (
a.city=?
or a.country_id is null
)
Queries can use the up method to navigate back up the entity graph.
For example, the following query will find all Customer entities at a given revision where the country name is România or
the Customer lives in Cluj-Napoca :
JAVA
List<Customer> customers = AuditReaderFactory
    .get( entityManager )
    .createQuery()
    .forEntitiesAtRevision( Customer.class, 1 )
    .traverseRelation( "address", JoinType.INNER, "a" )
    .traverseRelation( "country", JoinType.INNER, "cn" )
    .up()
    .up()
    .add(
        AuditEntity.disjunction()
            .add( AuditEntity.property( "a", "city" ).eq( "Cluj-Napoca" ) )
            .add( AuditEntity.property( "cn", "name" ).eq( "România" ) )
    )
    .addOrder( AuditEntity.property( "createdOn" ).asc() )
    .getResultList();
select
cu.id as id1_5_,
cu.REV as REV2_5_,
cu.REVTYPE as REVTYPE3_5_,
cu.REVEND as REVEND4_5_,
cu.created_on as created_5_5_,
cu.firstName as firstNam6_5_,
cu.lastName as lastName7_5_,
cu.address_id as address_8_5_
from
Customer_AUD cu
inner join
Address_AUD a
on (
cu.address_id=a.id
or (
cu.address_id is null
)
and (
a.id is null
)
)
inner join
Country_AUD co
on (
a.country_id=co.id
or (
a.country_id is null
)
and (
co.id is null
)
)
where
cu.REV<=?
and cu.REVTYPE<>?
and (
cu.REVEND>?
or cu.REVEND is null
)
and (
a.city=?
or co.name=?
)
and a.REV<=?
and (
a.REVEND>?
or a.REVEND is null
)
and co.REV<=?
and (
co.REVEND>?
or co.REVEND is null
)
order by
cu.created_on asc
Lastly, this example illustrates how related entity properties can be compared in a single constraint.
Assuming the Customer and the Address were previously changed as follows:
JAVA
Customer customer = entityManager.createQuery(
    "select c " +
    "from Customer c " +
    "join fetch c.address a " +
    "join fetch a.country " +
    "where c.id = :id", Customer.class )
.setParameter( "id", 1L )
.getSingleResult();

customer.getAddress().setCity(
    customer.getAddress().getCountry().getName()
);
The following query shows how to find the Customer entities where the city property of the address attribute equals the
name of the associated country attribute.
JAVA
List<Number> revisions = AuditReaderFactory.get( entityManager ).getRevisions(
    Customer.class,
    1L
);

List<Customer> customerAudits = AuditReaderFactory.get( entityManager )
    .createQuery()
    .forEntitiesAtRevision( Customer.class, revisions.get( revisions.size() - 1 ) )
    .traverseRelation( "address", JoinType.INNER, "a" )
    .traverseRelation( "country", JoinType.INNER, "cr" )
    .up()
    .up()
    .add( AuditEntity.property( "a", "city" ).eqProperty( "cr", "name" ) )
    .getResultList();
select
cu.id as id1_5_,
cu.REV as REV2_5_,
cu.REVTYPE as REVTYPE3_5_,
cu.REVEND as REVEND4_5_,
cu.created_on as created_5_5_,
cu.firstName as firstNam6_5_,
cu.lastName as lastName7_5_,
cu.address_id as address_8_5_
from
Customer_AUD cu
inner join
Address_AUD a
on (
cu.address_id=a.id
or (
cu.address_id is null
)
and (
a.id is null
)
)
inner join
Country_AUD cr
on (
a.country_id=cr.id
or (
a.country_id is null
)
and (
cr.id is null
)
)
where
cu.REV<=?
and cu.REVTYPE<>?
and a.city=cr.name
and (
cu.REVEND>?
or cu.REVEND is null
)
and a.REV<=?
and (
a.REVEND>?
or a.REVEND is null
)
and cr.REV<=?
and (
cr.REVEND>?
or cr.REVEND is null
)
Sometimes, it may be useful to query the revision information entity log without instantiating the actual entities themselves.
JAVA
AuditQuery query = getAuditReader().createQuery()
    .forRevisionsOfEntity( DefaultRevisionEntity.class, true )
    .add( AuditEntity.revisionNumber().between( 1, 25 ) );
This query will return all revision information entities for revisions between 1 and 25 including those which are related to
deletions. If deletions are not of interest, you would pass false as the second argument.
Note that this query uses the DefaultRevisionEntity class type. The class you provide will vary depending on the configuration
properties used to configure Envers, or whether you supply your own revision entity. Typically, users of this API will be
providing a custom revision entity implementation to obtain the custom information being maintained per revision.
Conditional auditing can be implemented by overriding some of the Envers event listeners. To use customized Envers event
listeners, the following steps are needed:
1. Turn off automatic Envers event listeners registration by setting the hibernate.envers.autoRegisterListeners Hibernate
property to false .
2. Create subclasses for the appropriate event listeners. For example, if you want to conditionally audit entity insertions, extend the
org.hibernate.envers.event.spi.EnversPostInsertEventListenerImpl class. Place the conditional-auditing logic in the
subclasses, and call the super method if auditing should be performed.
3. Create your own implementation of org.hibernate.integrator.spi.Integrator, similar to
org.hibernate.envers.boot.internal.EnversIntegrator, that registers your event listener classes instead of the default ones.
4. For the integrator to be automatically used when Hibernate starts up, you will need to add a META-INF/services
/org.hibernate.integrator.spi.Integrator file to your jar. The file should contain the fully qualified name of the class
implementing the interface.
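The subclassing pattern from step 2 can be sketched with a simplified stand-in. The real class to extend is org.hibernate.envers.event.spi.EnversPostInsertEventListenerImpl; the interface and class names below are illustrative only, modeling the "check a condition, then call super" shape:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class ConditionalAuditSketch {
    // Simplified stand-in for a post-insert event listener (illustrative,
    // not the Envers SPI).
    interface PostInsertListener {
        void onPostInsert(Object entity);
    }

    // Stand-in for the default Envers listener: records every insertion.
    static class AuditLog implements PostInsertListener {
        final List<Object> audited = new ArrayList<>();
        public void onPostInsert(Object entity) { audited.add( entity ); }
    }

    // The pattern from step 2: run the conditional logic first and only
    // delegate (i.e. call super.onPostInsert) when auditing should happen.
    static class ConditionalListener implements PostInsertListener {
        final PostInsertListener delegate;
        final Predicate<Object> shouldAudit;

        ConditionalListener(PostInsertListener delegate, Predicate<Object> shouldAudit) {
            this.delegate = delegate;
            this.shouldAudit = shouldAudit;
        }

        public void onPostInsert(Object entity) {
            if ( shouldAudit.test( entity ) ) {
                delegate.onPostInsert( entity ); // i.e. super.onPostInsert( event )
            }
        }
    }

    public static void main(String[] args) {
        AuditLog log = new AuditLog();
        PostInsertListener listener =
            new ConditionalListener( log, e -> !e.toString().startsWith( "tmp" ) );
        listener.onPostInsert( "customer-1" );
        listener.onPostInsert( "tmp-draft" );   // filtered out, not audited
        System.out.println( log.audited );      // [customer-1]
    }
}
```

In the real listener, the condition would typically inspect the event's entity and persister rather than a string.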
id
id of the original entity (this can be more than one column in the case of composite primary keys)
revision number
an integer, which matches the revision number in the revision entity table.
revision type
The org.hibernate.envers.RevisionType enumeration ordinal stating if the change represents an INSERT, UPDATE or
DELETE.
audited fields
properties from the original entity being audited
The primary key of the audit table is the combination of the original id of the entity and the revision number, so there can be at
most one historic entry for a given entity instance at a given revision.
The current entity data is stored both in the original table and in the audit table. This is a duplication of data; however, this solution
makes the query system much more powerful and, as storage is cheap, hopefully this will not be a major drawback for the users.
A row in the audit table with entity id ID , revision N , and data D means: the entity with id ID has data D from revision N
upwards. Hence, if we want to find an entity at revision M , we have to search for the row in the audit table whose revision
number is less than or equal to M , but as large as possible. If no such row is found, or a row with a "deleted" marker is found, it
means that the entity didn’t exist at that revision.
The "revision type" field can currently have three values: 0 , 1 and 2 , which means ADD , MOD , and DEL , respectively. A row
with a revision of type DEL will only contain the id of the entity and no data (all fields NULL ), as it only serves as a marker saying
"this entity was deleted at that revision".
Additionally, there is a revision entity table which contains the information about the global revision. By default, the generated
table is named REVINFO and contains just two columns: ID and TIMESTAMP . A row is inserted into this table on each new
revision, that is, on each commit of a transaction that changes audited data. The name of this table and of its columns can be
configured, and additional columns can be added, as discussed in Revision Log.
While global revisions are a good way to provide correct auditing of relations, some people have
pointed out that this may be a bottleneck in systems, where data is very often modified.
One viable solution is to introduce an option to have an entity "locally revisioned", that is revisions
would be created for it independently. This would not enable correct versioning of relations, but it would work
without the REVINFO table.
Another possibility is to introduce a notion of "revisioning groups", which would group entities sharing the same
revision numbering. Each such group would have to consist of one or more strongly connected components
belonging to the entity graph induced by relations between entities.
This task will generate the definitions of all entities, both those which are audited by Envers and those which are not.
For the following entities, Hibernate is going to generate the following database schema:
@Audited
@Entity(name = "Customer")
public static class Customer {

    @Id
    private Long id;

    //...
}

@Audited
@Entity(name = "Address")
public static class Address {

    @Id
    private Long id;

    //...
}

@Audited
@Entity(name = "Country")
public static class Country {

    @Id
    private Long id;

    //...
}
In the case of bags, however (which require a join table), if there is a duplicate element, the two tuples corresponding to the elements
will be the same. Although Hibernate allows this, Envers (or more precisely, the database connector) will throw an exception
when trying to persist two identical elements, because of a unique constraint violation.
There are at least two ways out if you need bag semantics:
1. use an indexed collection (e.g. with an @OrderColumn ), or
2. use a set instead of a bag.
To be able to name the additional join table, there is a special annotation: @AuditJoinTable , which has similar semantics to JPA
@JoinTable .
One special case is to have relations mapped with @OneToMany with @JoinColumn on the one side, and @ManyToOne and
@JoinColumn( insertable=false, updatable=false ) on the many side. Such relations are, in fact, bidirectional, but the
owning side is the collection.
To properly audit such relations with Envers, you can use the @AuditMappedBy annotation. It enables you to specify the reverse
property (using the mappedBy element). In the case of indexed collections, the index column must also be mapped in the referenced
entity (using @Column( insertable=false, updatable=false ) ) and specified using positionMappedBy . This annotation only
affects the way Envers works. Please note that the annotation is experimental and may change in the future.
1. Improved query performance by selectively moving rows to various partitions (or even purging old rows)
org.hibernate.envers.audit_strategy = org.hibernate.envers.strategy.ValidityAuditStrategy
org.hibernate.envers.audit_strategy_validity_store_revend_timestamp = true
Optionally, you can also override the default values using following properties:
org.hibernate.envers.audit_strategy_validity_end_rev_field_name
org.hibernate.envers.audit_strategy_validity_revend_timestamp_field_name
The reason why the end revision information should be used for audit table partitioning is based on the assumption that audit
tables should be partitioned on an 'increasing level of relevancy', like so:
1. A couple of partitions with audit data that is not very (or no longer) relevant. This can be stored on slow media, and perhaps
even be purged eventually.
2. One or more partitions with audit data that is potentially relevant.
3. One partition for audit data that is most likely to be relevant. This should be stored on the fastest media, both for reading and
writing.
Currently, the salary table contains the following rows for a certain person X:

Year  Salary
2006    3300
2007    3500
2008    4000
2009    4500
The salary for the current fiscal year (2010) is unknown. The agency requires that all changes in registered salaries for a fiscal
year are recorded (i.e. an audit trail). The rationale behind this is that decisions made at a certain date are based on the registered
salary at that time. And at any time, it must be possible to reproduce the reason why a certain decision was made at a certain date.
1. For the fiscal year 2006, there is only one revision. It has the oldest revision timestamp of all audit rows, but should still be
regarded as relevant because it’s the latest modification for this fiscal year in the salary table (its end revision timestamp is
null).
Also, note that it would be very unfortunate if in 2011 there would be an update of the salary for the fiscal year 2006 (which is
possible until at least 10 years after the fiscal year), and the audit information would have been moved to a slow disk (based
on the age of the revision timestamp). Remember that, in this case, Envers will have to update the end revision timestamp of
the most recent audit row.
2. There are two revisions in the salary of the fiscal year 2007 which both have nearly the same revision timestamp and a
different end revision timestamp.
At first sight, it is evident that the first revision was a mistake and probably not relevant. The only relevant revision for 2007 is
the one with end revision timestamp null.
Based on the above, it is evident that only the end revision timestamp is suitable for audit table partitioning. The revision
timestamp is not suitable.
This partitioning scheme also covers the potential problem of the update of the end revision timestamp, which occurs if a row in
the audited table is modified. Even though Envers will update the end revision timestamp of the audit row to the system date at
the instant of modification, the audit row will remain in the same partition (the 'extension bucket').
And sometime in 2011, the last partition (or 'extension bucket') is split into two new partitions:
1. end revision timestamp year = 2010: this partition contains audit data that is potentially relevant (in 2011).
2. end revision timestamp year >= 2011 or null: this partition contains the most interesting audit data and is the new 'extension
bucket'.
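The partition choice implied by this scheme can be sketched in plain Java. The method and partition names are illustrative only; they model the rule that rows with a null or recent end revision timestamp stay in the 'extension bucket', while older rows move to year-based partitions:

```java
import java.time.LocalDate;

public class PartitionChooser {
    // Pick a partition for an audit row based on its end revision timestamp.
    // A null timestamp means the row is still the current version, so it
    // must stay in the 'extension bucket' (it may yet be updated in place).
    static String partitionFor(LocalDate endRevTimestamp, int currentYear) {
        if ( endRevTimestamp == null || endRevTimestamp.getYear() >= currentYear ) {
            return "extension_bucket";
        }
        return "p_" + endRevTimestamp.getYear();
    }

    public static void main(String[] args) {
        System.out.println( partitionFor( null, 2011 ) );                        // extension_bucket
        System.out.println( partitionFor( LocalDate.of( 2008, 4, 1 ), 2011 ) ); // p_2008
        System.out.println( partitionFor( LocalDate.of( 2011, 2, 1 ), 2011 ) ); // extension_bucket
    }
}
```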
2. Forum (http://community.jboss.org/en/envers?view=discussions)
3. JIRA issue tracker (https://hibernate.atlassian.net/) (when adding issues concerning Envers, be sure to select the "envers"
component!)
5. FAQ (https://community.jboss.org/wiki/EnversFAQ)
22.2. Dialect
The first line of portability for Hibernate is the dialect, which is a specialization of the org.hibernate.dialect.Dialect
contract. A dialect encapsulates all the differences in how Hibernate must communicate with a particular database to accomplish
some task like getting a sequence value or structuring a SELECT query. Hibernate bundles a wide range of dialects for many of the
most popular databases. If you find that your particular database is not among them, it is not terribly difficult to write your own.
Starting with version 3.2, Hibernate introduced the notion of automatically detecting the dialect to use based on the
java.sql.DatabaseMetaData obtained from a java.sql.Connection to that database. This was much better, except that this
resolution was limited to databases Hibernate knew about ahead of time and was in no way configurable or overrideable.
Starting with version 3.3, Hibernate has a far more powerful way to automatically determine which dialect should be used, by
relying on a series of delegates which implement the org.hibernate.dialect.resolver.DialectResolver contract, which
defines only a single method:
JAVA
public Dialect resolveDialect(DatabaseMetaData metaData) throws JDBCConnectionException
The basic contract here is that if the resolver 'understands' the given database metadata, it returns the corresponding Dialect;
if not, it returns null, and the process continues to the next resolver. The signature also identifies
org.hibernate.exception.JDBCConnectionException as possibly being thrown. A JDBCConnectionException here is
interpreted to imply a non-transient (aka non-recoverable) connection problem, and is used to indicate an immediate stop to
resolution attempts. All other exceptions result in a warning, and the process continues on to the next resolver.
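The chain semantics just described can be sketched in plain Java. Plain functions stand in for DialectResolver implementations and a local exception type stands in for JDBCConnectionException; all names here are illustrative, not Hibernate's:

```java
import java.util.List;
import java.util.function.Function;

public class ResolverChain {
    // Stand-in for the non-recoverable connection problem.
    static class ConnectionProblem extends RuntimeException {}

    // Walk the delegates in order: the first non-null answer wins, a
    // ConnectionProblem aborts resolution immediately, and any other
    // exception is logged as a warning and the chain continues.
    static String resolve(List<Function<String, String>> resolvers, String metadata) {
        for ( Function<String, String> resolver : resolvers ) {
            try {
                String dialect = resolver.apply( metadata );
                if ( dialect != null ) {
                    return dialect;
                }
            }
            catch (ConnectionProblem e) {
                throw e; // non-transient: stop resolution attempts
            }
            catch (RuntimeException e) {
                // warn and continue with the next resolver
            }
        }
        return null; // no resolver understood the metadata
    }

    public static void main(String[] args) {
        List<Function<String, String>> chain = List.of(
            md -> md.contains( "PostgreSQL" ) ? "PostgreSQLDialect" : null,
            md -> md.contains( "H2" ) ? "H2Dialect" : null
        );
        System.out.println( resolve( chain, "H2 1.4" ) ); // H2Dialect
    }
}
```

Custom resolvers registered via hibernate.dialect_resolvers are simply prepended to this kind of chain, which is why they take precedence over the built-in ones.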
The cool part about these resolvers is that users can also register their own custom resolvers which will be processed ahead of the
built-in Hibernate ones. This might be useful in a number of different situations:
it allows easy integration for auto-detection of dialects beyond those shipped with Hibernate itself
it allows you to specify that a custom dialect should be used when a particular database is recognized.
To register one or more resolvers, simply specify them (separated by commas, tabs or spaces) using the
'hibernate.dialect_resolvers' configuration setting (see the DIALECT_RESOLVERS constant on org.hibernate.cfg.Environment ).
However, an insidious implication of this approach comes about when targeting some databases which support identity
generation and some which do not. Identity generation relies on the SQL definition of an IDENTITY (or auto-increment) column to
manage the identifier value. It is what is known as a post-insert generation strategy, because the insert must actually happen
before we can know the identifier value.
Because Hibernate relies on this identifier value to uniquely reference entities within a persistence context, it must then issue the
insert immediately when the user requests that the entity be associated with the session (e.g. like via save() or persist() ),
regardless of current transactional semantics.
Hibernate was changed slightly, once the implications of this were better understood, so now the
insert could be delayed in cases where this is feasible.
The underlying issue is that the actual semantics of the application itself changes in these cases.
Starting with version 3.2.3, Hibernate comes with a set of enhanced (http://in.relation.to/2082.lace) identifier generators targeting
portability in a much different way.
org.hibernate.id.enhanced.SequenceStyleGenerator
org.hibernate.id.enhanced.TableGenerator
The idea behind these generators is to port the actual semantics of the identifier value generation to the different databases. For
example, the org.hibernate.id.enhanced.SequenceStyleGenerator mimics the behavior of a sequence on databases which
do not support sequences by using a table.
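The table-backed emulation can be sketched as follows. A map stands in for the physical table that SequenceStyleGenerator would maintain; the class and method names are illustrative only, and the real generator performs this read-and-increment in its own database transaction:

```java
import java.util.HashMap;
import java.util.Map;

public class TableSequenceSketch {
    // The "table": one row per logical sequence name holding the next value.
    private final Map<String, Long> table = new HashMap<>();

    // Emulate "select next value": read the current value for the sequence
    // name and write back current + increment, which is what the generator
    // does with a table when the database has no real sequences.
    public synchronized long nextValue(String sequenceName, int increment) {
        long current = table.getOrDefault( sequenceName, 1L );
        table.put( sequenceName, current + increment );
        return current;
    }

    public static void main(String[] args) {
        TableSequenceSketch seq = new TableSequenceSketch();
        System.out.println( seq.nextValue( "customer_seq", 1 ) ); // 1
        System.out.println( seq.nextValue( "customer_seq", 1 ) ); // 2
    }
}
```

Because the observable behavior (monotonically increasing values per named sequence) is the same either way, entity mappings stay portable across databases with and without native sequences.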
This is an area in Hibernate in need of improvement. In terms of portability concerns, this function
handling currently works pretty well in HQL, however, it is quite lacking in all other aspects.
SQL functions can be referenced in many ways by users. However, not all databases support the same set of functions. Hibernate
provides a means of mapping a logical function name to a delegate which knows how to render that particular function, perhaps
even using a totally different physical function call.
It is sort of implemented such that users can programmatically register functions with the
org.hibernate.cfg.Configuration and those functions will be recognized for HQL.
HQL/JPQL differences
naming strategies
basic types
simple id types
generated id types
23. Configurations
strategy instance
An instance of the strategy implementation to use can be specified
This includes both in terms of parsing or translating a query as well as calls to the javax.persistence.Query methods
throwing spec defined exceptions whereas Hibernate might not.
If enabled, we will recognize it as a List where javax.persistence.OrderColumn is just missing (and its defaults will apply).
This setting controls whether the JPA spec-defined behavior or the Hibernate behavior will be used.
If enabled, Hibernate will operate in the JPA specified way, throwing exceptions when the spec says it should.
Traditionally, Hibernate does not initialize an entity proxy when accessing its identifier since we already know the identifier
value, hence we can save a database roundtrip.
If enabled Hibernate will initialize the entity proxy even when accessing its identifier.
If enabled, the names used by @TableGenerator and @SequenceGenerator will be considered global, so configuring two
different generators with the same name will cause a java.lang.IllegalArgumentException to be thrown at boot time.
hibernate.connection.username or javax.persistence.jdbc.user
Names the JDBC connection user name.
hibernate.connection.password or javax.persistence.jdbc.password
Names the JDBC connection password.
Hibernate is configured to get Connections from an underlying DataSource, and that DataSource is already configured to
disable auto-commit.
Hibernate is configured to get Connections from a non-DataSource connection pool and that connection pool is already
configured to disable auto-commit. For the Hibernate provided implementation this will depend on the value of
hibernate.connection.autocommit setting.
Hibernate uses this assurance as an opportunity to opt-out of certain operations that may have a performance impact
(although this impact is generally negligible). Specifically, when a transaction is started via the Hibernate or JPA transaction
APIs Hibernate will generally immediately acquire a Connection from the provider and:
check whether the Connection is initially in auto-commit mode via a call to Connection#getAutoCommit to know how to
clean up the Connection when released.
We can skip both of those steps if we know that the ConnectionProvider will always return Connections with auto-commit
disabled. That is the purpose of this setting. By setting it to true , the Connection acquisition can be delayed until the first
SQL statement is needed to be executed. The connection acquisition delay allows you to reduce the database connection
lease time, therefore allowing you to increase the transaction throughput.
It is inappropriate to set this value to true when the Connections Hibernate gets from the provider do not, in fact,
have auto-commit disabled.
Doing so will lead to Hibernate executing SQL operations outside of any JDBC/SQL transaction.
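The acquisition delay described above can be sketched in plain Java. A Supplier stands in for the ConnectionProvider and a String stands in for the Connection; the class and method names are illustrative only:

```java
import java.util.function.Supplier;

public class LazyConnectionSketch {
    // Defers the expensive acquisition until the first statement actually
    // needs a connection, shortening the connection lease time.
    static class LazyConnection {
        private final Supplier<String> provider; // stand-in for ConnectionProvider
        private String connection;               // null until first use

        LazyConnection(Supplier<String> provider) {
            this.provider = provider;
        }

        String executeSql(String sql) {
            if ( connection == null ) {
                connection = provider.get(); // acquired only now
            }
            return connection + " ran: " + sql;
        }

        boolean acquired() {
            return connection != null;
        }
    }

    public static void main(String[] args) {
        LazyConnection c = new LazyConnection( () -> "conn-1" );
        System.out.println( c.acquired() );               // false: transaction began, no connection yet
        System.out.println( c.executeSql( "select 1" ) ); // conn-1 ran: select 1
        System.out.println( c.acquired() );               // true
    }
}
```

The setting is safe precisely because, with auto-commit known to be disabled, nothing needs to be checked or toggled on the connection before the first statement runs.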
hibernate.connection.datasource
Either a javax.sql.DataSource instance or a JNDI name under which to locate the DataSource .
hibernate.connection
Names a prefix used to define arbitrary JDBC connection properties. These properties are passed along to the JDBC provider
when creating a connection.
Can reference:
an instance of ConnectionProvider
The term class appears in the setting name due to legacy reasons. However, it can accept instances.
hibernate.jndi.class
hibernate.jndi
hibernate.c3p0.max_size (e.g. 5)
Maximum size of C3P0 connection pool. Refers to c3p0 maxPoolSize setting (http://www.mchange.com/projects/c3p0/#maxPoolSize).
hibernate.c3p0.max_statements (e.g. 5)
Maximum size of C3P0 statement cache. Refers to c3p0 maxStatements setting
(http://www.mchange.com/projects/c3p0/#maxStatements).
hibernate.c3p0.acquire_increment (e.g. 2)
The number of connections acquired at a time when there’s no connection available in the pool. Refers to c3p0
acquireIncrement setting (http://www.mchange.com/projects/c3p0/#acquireIncrement).
hibernate.c3p0.idle_test_period (e.g. 5)
Idle time before a C3P0 pooled connection is validated. Refers to c3p0 idleConnectionTestPeriod setting
(http://www.mchange.com/projects/c3p0/#idleConnectionTestPeriod).
hibernate.c3p0
A setting prefix used to indicate additional c3p0 properties that need to be passed to the underlying c3p0 connection pool.
Existing applications may want to disable this (set it false ) for upgrade compatibility from 3.x and 4.x to 5.x.
When a generator specifies an increment-size and an optimizer was not explicitly specified, this setting controls which of the
pooled optimizers should be preferred. Can specify an optimizer short name or the FQN of an Optimizer implementation to
be used.
The default value is true meaning that @GeneratedValue.generator() will be used as the sequence/table name by default.
Users migrating from earlier versions using the legacy hibernate_sequence name should disable this setting.
Because we want to make sure that legacy applications continue to work as well, that puts us in a bind in terms of how to handle
implicit discriminator mappings. The solution is to assume that the absence of discriminator metadata means to follow the
legacy behavior, unless this setting is enabled.
With this setting enabled, Hibernate will interpret the absence of discriminator metadata as an indication to use the JPA-
defined defaults for these absent annotations.
See Hibernate Jira issue HHH-6911 (https://hibernate.atlassian.net/browse/HHH-6911) for additional background info.
Existing applications rely (implicitly or explicitly) on Hibernate ignoring any DiscriminatorColumn declarations on joined
inheritance hierarchies. This setting allows these applications to maintain the legacy behavior of DiscriminatorColumn
annotations being ignored when paired with joined inheritance.
See Hibernate Jira issue HHH-6911 (https://hibernate.atlassian.net/browse/HHH-6911) for additional background info.
default
jpa
legacy-jpa
legacy-hbm
component-path
If this property happens to be empty, the fallback is to use the default strategy.
hibernate.physical_naming_strategy (e.g.
org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl (default value))
hibernate.archive.scanner
Accepts either:
hibernate.archive.interpreter
Pass ArchiveDescriptorFactory
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/boot/archive/spi/ArchiveDescriptorFactory.html) to use in the scanning
process.
Accepts either:
class
hbm
scan hbm mapping files (e.g. hbm.xml ) to extract entity mapping metadata
By default, HBM, annotations, and JPA XML mappings are all scanned.
When using JPA, to disable the automatic scanning of all entity classes, the exclude-unlisted-classes persistence.xml
element must be set to true. Therefore, when setting exclude-unlisted-classes to true, only the classes that are explicitly
declared in the persistence.xml configuration files are going to be taken into consideration.
The default is hbm,class , therefore hbm.xml files are processed first, followed by annotations (combined with orm.xml
mappings).
When using JPA, the XML mapping overrides a conflicting annotation mapping that targets the same entity attribute.
Unless specified, the JDBC Driver uses the default JVM time zone. If a different time zone is configured via this setting, the JDBC
PreparedStatement#setTimestamp
(https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html#setTimestamp-int-java.sql.Timestamp-java.util.Calendar-) is going to
use a Calendar instance according to the specified time zone.
Defaults to false if Bean Validation is present in the classpath and Hibernate Annotations is used, true otherwise.
To disable constraint propagation to DDL, set up hibernate.validator.apply_to_ddl to false in the configuration file.
Such a need is very uncommon and not recommended.
This is an experimental feature that has known issues. It should not be used in production until it is stabilized. See Hibernate
Jira issue HHH-11936 (https://hibernate.atlassian.net/browse/HHH-11936) for details.
Should we strictly adhere to JPA Query Language (JPQL) syntax, or more broadly support all of Hibernate’s superset (HQL)?
Setting this to true may cause valid HQL to throw an exception because it violates the JPQL subset.
This defines a global setting, which can then be controlled per parameter via
org.hibernate.procedure.ParameterRegistration#enablePassingNulls(boolean)
Values are true (pass the NULLs) or false (do not pass the NULLs).
The org.hibernate.query.criteria.LiteralHandlingMode#BIND mode will use bind variables for any literal value. The
org.hibernate.query.criteria.LiteralHandlingMode#INLINE mode will inline literal values as-is.
Valid options are defined by the org.hibernate.query.criteria.LiteralHandlingMode enum. The default value is
org.hibernate.query.criteria.LiteralHandlingMode#AUTO .
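As a sketch, assuming the setting in question is hibernate.criteria.literal_handling_mode , forcing bind variables for all criteria literal values would look like:

```xml
<property name="hibernate.criteria.literal_handling_mode" value="BIND"/>
```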
This configuration property allows you to DROP the tables used for multi-table bulk HQL operations when the
SessionFactory or the EntityManagerFactory is closed.
This configuration property defines the database schema used for storing the temporary tables used for bulk HQL operations.
This configuration property defines the database catalog used for storing the temporary tables used for bulk HQL operations.
Legacy 4.x behavior favored performing pagination in-memory by avoiding the use of the offset value, which yields poor
overall performance. In 5.x, the limit handler behavior favors performance; thus, if the dialect doesn’t support offsets, an exception is
thrown instead.
The default is true . Existing applications may want to disable this (set it to false ) if non-conventional Java constants are used.
However, there is a significant performance overhead for using non-conventional Java constants since Hibernate cannot
determine whether aliases should be treated as Java constants or not.
Set this property to true if your JDBC driver returns correct row counts from executeBatch(). This option is usually safe, but is
disabled by default. If enabled, Hibernate uses batched DML for automatically versioned data.
hibernate.default_batch_fetch_size (e.g. 4 , 8 , or 16 )
The default size for Hibernate Batch fetching of associations (lazily fetched associations can be fetched in batches to prevent
N+1 query problems).
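For example, to fetch lazy associations in batches of 16 (so that initializing one lazy association also initializes up to 15 other uninitialized associations of the same role held in the persistence context):

```xml
<property name="hibernate.default_batch_fetch_size" value="16"/>
```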
Although enabling this configuration can make LazyInitializationException go away, it’s better to use a fetch plan that
guarantees that all properties are properly initialized before the Session is closed.
Write all SQL statements to the console. This is an alternative to setting the log category org.hibernate.SQL to debug.
The default value of this setting is determined by the value for hibernate.generate_statistics , meaning that if statistics
are enabled, then logging of Session metrics is enabled by default too.
hibernate.cache.default_cache_concurrency_strategy
hibernate.ejb.collectioncache (e.g.
hibernate.ejb.collectioncache.org.hibernate.ejb.test.Item.distributors = read-write, RegionName )
Sets the collection cache concurrency strategy for the designated region. Caching configuration should follow the
pattern hibernate.ejb.collectioncache.<fully.qualified.Classname>.<role> usage[, region] , where usage is
the cache concurrency strategy used and region is the cache region name.
hibernate.transaction.jta.platform_resolver
hibernate.transaction.coordinator_class
jdbc
jta
If a JPA application does not provide a setting for hibernate.transaction.coordinator_class , Hibernate will
automatically build the proper transaction coordinator based on the transaction type for the persistence unit.
If a non-JPA application does not provide a setting for hibernate.transaction.coordinator_class , Hibernate will use
jdbc as the default. This default will cause problems if the application actually uses JTA-based transactions. A non-JPA
application that uses JTA-based transactions should explicitly set hibernate.transaction.coordinator_class=jta or
provide a custom TransactionCoordinatorBuilder
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/resource/transaction/TransactionCoordinatorBuilder.html) that builds a
TransactionCoordinator
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/resource/transaction/TransactionCoordinator.html) that properly
coordinates with JTA-based transactions.
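For example, a non-JPA application using JTA-based transactions could declare this explicitly in its hibernate.cfg.xml (a sketch; the surrounding session-factory element is omitted):

```xml
<property name="hibernate.transaction.coordinator_class">jta</property>
```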
Handling this situation requires checking the Thread ID every time the Session is called, so enabling this setting can certainly
have a performance impact.
hibernate.transaction.factory_class
This is a legacy setting that has been deprecated; use
hibernate.transaction.jta.platform instead.
It allows access to the underlying org.hibernate.Transaction even when using JTA since the JPA specification prohibits this
behavior.
If this configuration property is set to true , access is granted to the underlying org.hibernate.Transaction . If it’s set to
false , you won’t be able to access the org.hibernate.Transaction .
The default behavior is to allow access unless the Session is bootstrapped via JPA.
hibernate.tenant_identifier_resolver
Names a CurrentTenantIdentifierResolver
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/context/spi/CurrentTenantIdentifierResolver.html) implementation to
resolve the current tenant identifier so that calling SessionFactory#openSession() gets a Session that’s
connected to the right tenant.
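A minimal resolver sketch, assuming the tenant identifier is bound to the current thread (the class name and fallback tenant are hypothetical; this depends on the Hibernate jars being on the classpath):

```java
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class ThreadLocalTenantResolver implements CurrentTenantIdentifierResolver {

    public static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenantId = TENANT.get();
        // Fall back to a default tenant when none is bound to the thread
        return tenantId != null ? tenantId : "default";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // Verify that any existing current Session matches the resolved tenant
        return true;
    }
}
```

The class would then be registered by setting hibernate.tenant_identifier_resolver to its fully-qualified name.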
hibernate.hbm2ddl.auto (e.g. update )
Setting to perform SchemaManagementTool actions automatically as part of the SessionFactory lifecycle. Valid options are
defined by the externalHbm2ddlName value of the Action
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/tool/schema/Action.html) enum:
none
create-only
drop
create
create-drop
Drop the schema and recreate it on SessionFactory startup. Additionally, drop the schema on SessionFactory shutdown.
validate
update
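For example, to have Hibernate update the existing schema on SessionFactory startup, the setting could be declared in the persistence.xml properties section as:

```xml
<property name="hibernate.hbm2ddl.auto" value="update"/>
```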
javax.persistence.schema-generation.database.action
Setting to perform SchemaManagementTool actions automatically as part of the SessionFactory lifecycle. Valid options are
defined by the externalJpaName value of the Action
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/tool/schema/Action.html) enum:
none
create
drop
drop-and-create
javax.persistence.schema-generation.scripts.action
Setting to perform SchemaManagementTool actions by writing the commands into a DDL script file. Valid options are defined by
the externalJpaName value of the Action (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/tool/schema/Action.html)
enum:
none
create
drop
drop-and-create
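As a sketch, generating a schema creation script to a file without touching the database could be configured as follows (the target file name is hypothetical):

```xml
<property name="javax.persistence.schema-generation.scripts.action" value="create"/>
<property name="javax.persistence.schema-generation.scripts.create-target" value="create.sql"/>
```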
javax.persistence.schema-generation-connection
javax.persistence.database-product-name
Specifies the name of the database provider in cases where a Connection to the underlying database is not available (mainly
when generating scripts). In such cases, a value for this setting must be specified.
javax.persistence.database-major-version
This value is used to help more precisely determine how to perform schema generation tasks for the underlying database in
cases where javax.persistence.database-product-name does not provide enough distinction.
javax.persistence.database-minor-version
This value is used to help more precisely determine how to perform schema generation tasks for the underlying database in
cases where javax.persistence.database-product-name and javax.persistence.database-major-version do not
provide enough distinction.
javax.persistence.schema-generation.create-source
Specifies whether schema generation commands for schema creation are to be determined based on object/relational mapping
metadata, DDL scripts, or a combination of the two. See SourceType
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/tool/schema/SourceType.html) for valid set of values.
javax.persistence.schema-generation.drop-source
Specifies whether schema generation commands for schema dropping are to be determined based on object/relational
mapping metadata, DDL scripts, or a combination of the two. See SourceType
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/tool/schema/SourceType.html) for valid set of values.
javax.persistence.schema-generation.create-script-source
Specifies the create script file as either a java.io.Reader configured for reading of the DDL script file or a string
designating a file java.net.URL for the DDL script.
javax.persistence.schema-generation.drop-script-source
Specifies the drop script file as either a java.io.Reader configured for reading of the DDL script file or a string designating a
file java.net.URL for the DDL script.
javax.persistence.schema-generation.scripts.create-target
For cases where the javax.persistence.schema-generation.scripts.action value indicates that schema creation
commands should be written to DDL script file, javax.persistence.schema-generation.scripts.create-target specifies
either a java.io.Writer configured for output of the DDL script or a string specifying the file URL for the DDL script.
javax.persistence.schema-generation.scripts.drop-target
For cases where the javax.persistence.schema-generation.scripts.action value indicates that schema dropping
commands should be written to DDL script file, javax.persistence.schema-generation.scripts.drop-target specifies
either a java.io.Writer configured for output of the DDL script or a string specifying the file URL for the DDL script.
These statements are only executed if the schema is created, meaning that hibernate.hbm2ddl.auto is set to create ,
create-drop , or update . javax.persistence.schema-generation.create-script-source /
javax.persistence.schema-generation.drop-script-source should be preferred.
javax.persistence.sql-load-script-source
JPA variant of hibernate.hbm2ddl.import_files . Specifies a java.io.Reader configured for reading of the SQL load script
or a string designating the file java.net.URL for the SQL load script. A "SQL load script" is a script that performs some
database initialization (INSERT, etc).
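For example, to run a hypothetical data.sql initialization script after the schema is exported:

```xml
<property name="javax.persistence.sql-load-script-source" value="data.sql"/>
```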
hibernate.hbm2ddl.import_files_sql_extractor
The reference may refer to an instance, a Class implementing ImportSqlCommandExtractor , or the fully-qualified name of the
ImportSqlCommandExtractor implementation. If the fully-qualified name is given, the implementation must provide a no-arg
constructor.
If this property is not supplied (or is explicitly false ), the provider should not attempt to create database schemas.
hibernate.hbm2ddl.schema_filter_provider
grouped
individually
hibernate.hbm2ddl.delimiter (e.g. ; )
Identifies the delimiter to use to separate schema management statements in script outputs.
If enabled, allows schema update and validation to support synonyms. Due to the possibility that this would return duplicate
tables (especially in Oracle), this is disabled by default.
DROP_RECREATE_QUIETLY
Default option. Attempts to drop, then (re-)create each unique constraint, ignoring any exceptions thrown.
RECREATE_QUIETLY
Attempts to (re-)create unique constraints, ignoring any exceptions thrown if the constraint already exists.
SKIP
By default, the EntityManagerFactory or SessionFactory is created even if the schema migration throws exceptions. To prevent this
default behavior, set this property value to true .
Can reference:
Interceptor instance
This setting identifies an Interceptor implementation that is to be applied to every Session opened from the
SessionFactory , but unlike hibernate.session_factory.interceptor , a unique instance of the Interceptor is used for
each Session .
Can reference:
Interceptor instance
The interceptor instance is specific to a given Session instance (and hence is not thread-safe); it has to implement
org.hibernate.Interceptor and have a no-arg constructor.
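A minimal sketch of such an interceptor, extending the convenience base class org.hibernate.EmptyInterceptor rather than implementing the whole interface (the class name and logging behavior are hypothetical; this depends on the Hibernate jars being on the classpath):

```java
import java.io.Serializable;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

// Hypothetical interceptor that logs every entity save.
public class LoggingInterceptor extends EmptyInterceptor {

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        System.out.println("Saving " + entity.getClass().getName() + "#" + id);
        // Returning false indicates the entity state was not modified
        return false;
    }
}
```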
Event listener list for a given event type. The list of event listeners is a comma-separated list of fully-qualified class names.
hibernate.jmx.agentId
hibernate.jmx.defaultDomain
hibernate.jmx.sessionFactoryName
The SessionFactory name appended to the object name the Manageable Bean is registered with. If null, the
hibernate.session_factory_name configuration value is used.
org.hibernate.core
The default object domain appended to the object name the Manageable Bean is registered with.
The property name defines the role (e.g. allowed ) and the entity class name (e.g. org.jboss.ejb3.test.jacc.AllEntity ),
while the property value defines the authorized actions (e.g. insert,update,read ).
hibernate.jacc_context_id
A String identifying the policy context whose PolicyConfiguration interface is to be returned. The value passed to this
parameter must not be null.
Used to define a java.util.Collection<ClassLoader> or the ClassLoader instance Hibernate should use for class-loading
and resource-lookups.
hibernate.classLoader.application
hibernate.classLoader.resources
hibernate.classLoader.hibernate
Names the ClassLoader responsible for loading Hibernate classes. By default, this is the ClassLoader that loaded this class.
hibernate.classLoader.environment
Names the ClassLoader used when Hibernate is unable to locate classes via the hibernate.classLoader.application or
hibernate.classLoader.hibernate class loaders.
Used to define an implementation of the PersisterClassResolver interface which can be used to customize how an entity or
a collection is being persisted.
Like a PersisterClassResolver , the PersisterFactory can be used to customize how an entity or a collection is being
persisted.
hibernate.metadata_builder_contributor (e.g. The instance, the class or the fully qualified class name of a
MetadataBuilderContributor
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/jpa/boot/spi/MetadataBuilderContributor.html))
Used to define an instance, the class or the fully qualified class name of a MetadataBuilderContributor
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/jpa/boot/spi/MetadataBuilderContributor.html) which can be used to
configure the MetadataBuilder when bootstrapping via the JPA EntityManagerFactory .
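A contributor sketch that registers a custom SQL function via the MetadataBuilder (the class and function names are hypothetical; this depends on the Hibernate jars being on the classpath):

```java
import org.hibernate.boot.MetadataBuilder;
import org.hibernate.boot.spi.MetadataBuilderContributor;
import org.hibernate.dialect.function.SQLFunctionTemplate;
import org.hibernate.type.StandardBasicTypes;

// Hypothetical contributor registering a group_concat SQL function.
public class SqlFunctionContributor implements MetadataBuilderContributor {

    @Override
    public void contribute(MetadataBuilder metadataBuilder) {
        metadataBuilder.applySqlFunction(
            "group_concat",
            new SQLFunctionTemplate(StandardBasicTypes.STRING, "group_concat(?1)")
        );
    }
}
```

The class would then be named via the hibernate.metadata_builder_contributor setting.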
If hibernate.session_factory_name_is_jndi is set to true , this is also the name under which the SessionFactory is
bound into JNDI on startup and from which it can be obtained from JNDI.
Defaults to true for backward compatibility. Set this to false if naming a SessionFactory is needed for serialization purposes,
but no writable JNDI context exists in the runtime environment or if the user simply does not want JNDI to be used.
enabled
Do the build
disabled
Do not do the build
ignoreUnsupported
Do the build, but ignore any non-JPA features that would otherwise result in a failure (e.g. @Any annotation).
enabled
Do the population
disabled
Do not do the population
skipUnsupported
Do the population, but ignore any non-JPA features that would otherwise result in the population failing (e.g. @Any
annotation).
Note that, for CDI-based containers, setting this is not necessary. Simply pass the BeanManager to use via
javax.persistence.bean.manager and optionally specify hibernate.delay_cdi_access .
This setting is intended more for integrating non-CDI bean containers, such as Spring.
true
Allows flushing an update outside of a transaction.
false
Does not allow it.
allow
performs the merge operation on each entity copy that is detected
log
(provided for testing only) performs the merge operation on each entity copy that is detected and logs information about the
entity copies. This setting requires DEBUG logging be enabled for EntityCopyAllowedLoggedObserver
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/event/internal/EntityCopyAllowedLoggedObserver.html).
In addition, the application may customize the behavior by providing an implementation of EntityCopyObserver
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/event/spi/EntityCopyObserver.html) and setting
hibernate.event.merge.entity_copy_observer to the class name. When this property is set to allow or log , Hibernate
will merge each entity copy detected while cascading the merge operation. In the process of merging each entity copy,
Hibernate will cascade the merge operation from each entity copy to its associations with cascade=CascadeType.MERGE or
CascadeType.ALL . The entity state resulting from merging an entity copy will be overwritten when another entity copy is
merged.
hibernate.listeners.envers.autoRegister
24.1.2. @AssociationOverride
The @AssociationOverride (http://docs.oracle.com/javaee/7/api/javax/persistence/AssociationOverride.html) annotation is used to
override an association mapping (e.g. @ManyToOne , @OneToOne , @OneToMany , @ManyToMany ) inherited from a mapped
superclass or an embeddable.
24.1.3. @AssociationOverrides
The @AssociationOverrides (http://docs.oracle.com/javaee/7/api/javax/persistence/AssociationOverrides.html) annotation is used to group
several @AssociationOverride annotations.
24.1.4. @AttributeOverride
The @AttributeOverride (http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeOverride.html) annotation is used to override an
attribute mapping inherited from a mapped superclass or an embeddable.
24.1.5. @AttributeOverrides
The @AttributeOverrides (http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeOverrides.html) annotation is used to group
several @AttributeOverride annotations.
24.1.6. @Basic
The @Basic (http://docs.oracle.com/javaee/7/api/javax/persistence/Basic.html) annotation is used to map a basic attribute type to a
database column.
24.1.7. @Cacheable
The @Cacheable (http://docs.oracle.com/javaee/7/api/javax/persistence/Cacheable.html) annotation is used to specify whether an entity
should be stored in the second-level cache.
If the persistence.xml shared-cache-mode XML attribute is set to ENABLE_SELECTIVE , then only the entities annotated with
the @Cacheable are going to be stored in the second-level cache.
If shared-cache-mode XML attribute value is DISABLE_SELECTIVE , then the entities marked with the @Cacheable annotation
are not going to be stored in the second-level cache, while all the other entities are stored in the cache.
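For instance, selective caching is declared in persistence.xml (the unit name is hypothetical), after which individual entities opt in with @Cacheable :

```xml
<persistence-unit name="myPersistenceUnit">
    <!-- Only entities annotated with @Cacheable are cached -->
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
</persistence-unit>
```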
24.1.8. @CollectionTable
The @CollectionTable (http://docs.oracle.com/javaee/7/api/javax/persistence/CollectionTable.html) annotation is used to specify the
database table that stores the values of a basic or an embeddable type collection.
24.1.9. @Column
The @Column (http://docs.oracle.com/javaee/7/api/javax/persistence/Column.html) annotation is used to specify the mapping between a
basic entity attribute and the database table column.
24.1.10. @ColumnResult
The @ColumnResult (http://docs.oracle.com/javaee/7/api/javax/persistence/ColumnResult.html) annotation is used in conjunction with the
@SqlResultSetMapping or @ConstructorResult annotations to map a SQL column for a given SELECT query.
See the Entity associations with named native queries section for more info.
24.1.11. @ConstructorResult
The @ConstructorResult (http://docs.oracle.com/javaee/7/api/javax/persistence/ConstructorResult.html) annotation is used in conjunction
with the @SqlResultSetMapping annotations to map columns of a given SELECT query to a certain object constructor.
See the Multiple scalar values NamedNativeQuery with ConstructorResult section for more info.
24.1.12. @Convert
The @Convert (http://docs.oracle.com/javaee/7/api/javax/persistence/Convert.html) annotation is used to specify the
AttributeConverter (http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeConverter.html) implementation used to convert the
currently annotated basic attribute.
24.1.13. @Converter
The @Converter (http://docs.oracle.com/javaee/7/api/javax/persistence/Converter.html) annotation is used to specify that the currently
annotated AttributeConverter (http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeConverter.html) implementation can be
used as a JPA basic attribute converter.
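A converter sketch that stores booleans as "Y"/"N" characters (the class name and encoding are hypothetical; this depends on the JPA API jar being on the classpath):

```java
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Hypothetical converter mapping Boolean attributes to "Y"/"N" columns.
@Converter(autoApply = false)
public class YesNoConverter implements AttributeConverter<Boolean, String> {

    @Override
    public String convertToDatabaseColumn(Boolean attribute) {
        return (attribute != null && attribute) ? "Y" : "N";
    }

    @Override
    public Boolean convertToEntityAttribute(String dbData) {
        return "Y".equals(dbData);
    }
}
```

An entity attribute would then reference it with @Convert(converter = YesNoConverter.class) .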
24.1.14. @Converts
The @Converts (http://docs.oracle.com/javaee/7/api/javax/persistence/Converts.html) annotation is used to group multiple @Convert
annotations.
24.1.15. @DiscriminatorColumn
The @DiscriminatorColumn (http://docs.oracle.com/javaee/7/api/javax/persistence/DiscriminatorColumn.html) annotation is used to
specify the discriminator column name and the discriminator type
(http://docs.oracle.com/javaee/7/api/javax/persistence/DiscriminatorColumn.html#discriminatorType--) for the SINGLE_TABLE and JOINED
Inheritance strategies.
24.1.16. @DiscriminatorValue
The @DiscriminatorValue (http://docs.oracle.com/javaee/7/api/javax/persistence/DiscriminatorValue.html) annotation is used to specify
what value of the discriminator column is used for mapping the currently annotated entity.
24.1.17. @ElementCollection
The @ElementCollection (http://docs.oracle.com/javaee/7/api/javax/persistence/ElementCollection.html) annotation is used to specify a
collection of basic or embeddable types.
24.1.18. @Embeddable
The @Embeddable (http://docs.oracle.com/javaee/7/api/javax/persistence/Embeddable.html) annotation is used to specify embeddable
types. Like basic types, embeddable types do not have any identity, being managed by their owning entity.
24.1.19. @Embedded
The @Embedded (http://docs.oracle.com/javaee/7/api/javax/persistence/Embedded.html) annotation is used to specify that a given entity
attribute represents an embeddable type.
24.1.20. @EmbeddedId
The @EmbeddedId (http://docs.oracle.com/javaee/7/api/javax/persistence/EmbeddedId.html) annotation is used to specify that the entity
identifier is an embeddable type.
See the Composite identifiers with @EmbeddedId section for more info.
24.1.21. @Entity
The @Entity (http://docs.oracle.com/javaee/7/api/javax/persistence/Entity.html) annotation is used to specify that the currently annotated
class represents an entity type. Unlike basic and embeddable types, entity types have an identity and their state is managed by the
underlying Persistence Context.
24.1.22. @EntityListeners
The @EntityListeners (http://docs.oracle.com/javaee/7/api/javax/persistence/EntityListeners.html) annotation is used to specify an array
of callback listener classes that are used by the currently annotated entity.
24.1.23. @EntityResult
The @EntityResult (http://docs.oracle.com/javaee/7/api/javax/persistence/EntityResult.html) annotation is used with the
@SqlResultSetMapping annotation to map the selected columns to an entity.
See the Entity associations with named native queries section for more info.
24.1.24. @Enumerated
The @Enumerated (http://docs.oracle.com/javaee/7/api/javax/persistence/Enumerated.html) annotation is used to specify that an entity
attribute represents an enumerated type.
24.1.25. @ExcludeDefaultListeners
The @ExcludeDefaultListeners (http://docs.oracle.com/javaee/7/api/javax/persistence/ExcludeDefaultListeners.html) annotation is used to
specify that the currently annotated entity skips the invocation of any default listener.
See the Exclude default entity listeners section for more info.
24.1.26. @ExcludeSuperclassListeners
The @ExcludeSuperclassListeners (http://docs.oracle.com/javaee/7/api/javax/persistence/ExcludeSuperclassListeners.html) annotation is
used to specify that the currently annotated entity skips the invocation of listeners declared by its superclass.
See the Exclude default entity listeners section for more info.
24.1.27. @FieldResult
The @FieldResult (http://docs.oracle.com/javaee/7/api/javax/persistence/FieldResult.html) annotation is used with the @EntityResult
annotation to map the selected columns to the fields of some specific entity.
See the Entity associations with named native queries section for more info.
24.1.28. @ForeignKey
The @ForeignKey (http://docs.oracle.com/javaee/7/api/javax/persistence/ForeignKey.html) annotation is used to specify the associated
foreign key of a @JoinColumn mapping. The @ForeignKey annotation is only used if the automated schema generation tool is
enabled, in which case, it allows you to customize the underlying foreign key definition.
24.1.29. @GeneratedValue
The @GeneratedValue (http://docs.oracle.com/javaee/7/api/javax/persistence/GeneratedValue.html) annotation specifies that the entity
identifier value is automatically generated using an identity column, a database sequence, or a table generator.
24.1.30. @Id
The @Id (http://docs.oracle.com/javaee/7/api/javax/persistence/Id.html) annotation specifies the entity identifier. An entity must always
have an identifier attribute which is used when loading the entity in a given Persistence Context.
24.1.31. @IdClass
The @IdClass (http://docs.oracle.com/javaee/7/api/javax/persistence/IdClass.html) annotation is used when the current entity defines a
composite identifier. A separate class encapsulates all the identifier attributes, which are mirrored by the current entity mapping.
See the Composite identifiers with @IdClass section for more info.
24.1.32. @Index
The @Index (http://docs.oracle.com/javaee/7/api/javax/persistence/Index.html) annotation is used by the automated schema generation
tool to create a database index.
24.1.33. @Inheritance
The @Inheritance (http://docs.oracle.com/javaee/7/api/javax/persistence/Inheritance.html) annotation is used to specify the inheritance
strategy of a given entity class hierarchy.
24.1.34. @JoinColumn
The @JoinColumn (http://docs.oracle.com/javaee/7/api/javax/persistence/JoinColumn.html) annotation is used to specify the FOREIGN KEY
column used when joining an entity association or an embeddable collection.
24.1.35. @JoinColumns
The @JoinColumns (http://docs.oracle.com/javaee/7/api/javax/persistence/JoinColumns.html) annotation is used to group multiple
@JoinColumn annotations, which are used when mapping an entity association or an embeddable collection using a composite
identifier.
24.1.36. @JoinTable
The @JoinTable (http://docs.oracle.com/javaee/7/api/javax/persistence/JoinTable.html) annotation is used to specify the link table
between two other database tables.
24.1.37. @Lob
The @Lob (http://docs.oracle.com/javaee/7/api/javax/persistence/Lob.html) annotation is used to specify that the currently annotated
entity attribute represents a large object type.
24.1.38. @ManyToMany
The @ManyToMany (http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToMany.html) annotation is used to specify a many-to-many
database relationship.
24.1.39. @ManyToOne
The @ManyToOne (http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToOne.html) annotation is used to specify a many-to-one
database relationship.
24.1.40. @MapKey
The @MapKey (http://docs.oracle.com/javaee/7/api/javax/persistence/MapKey.html) annotation is used to specify the key of a
java.util.Map association for which the key type is either the primary key or an attribute of the entity which represents the
value of the map.
24.1.41. @MapKeyClass
The @MapKeyClass (http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyClass.html) annotation is used to specify the type of the
map key of a java.util.Map association.
24.1.42. @MapKeyColumn
The @MapKeyColumn (http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyColumn.html) annotation is used to specify the database
column which stores the key of a java.util.Map association for which the map key is a basic type.
See the @MapKeyType mapping section for an example of @MapKeyColumn annotation usage.
24.1.43. @MapKeyEnumerated
The @MapKeyEnumerated (http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyEnumerated.html) annotation is used to specify that
the key of java.util.Map association is a Java Enum.
24.1.44. @MapKeyJoinColumn
The @MapKeyJoinColumn (http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyJoinColumn.html) annotation is used to specify that
the key of java.util.Map association is an entity association. The map key column is a FOREIGN KEY in a link table that also
joins the Map owner’s table with the table where the Map value resides.
24.1.45. @MapKeyJoinColumns
The @MapKeyJoinColumns (http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyJoinColumns.html) annotation is used to group
several @MapKeyJoinColumn mappings when the java.util.Map association key uses a composite identifier.
24.1.46. @MapKeyTemporal
The @MapKeyTemporal (http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyTemporal.html) annotation is used to specify that the key of a java.util.Map association is a temporal type (e.g. java.util.Date ).
24.1.47. @MappedSuperclass
The @MappedSuperclass (http://docs.oracle.com/javaee/7/api/javax/persistence/MappedSuperclass.html) annotation is used to specify that
the currently annotated type attributes are inherited by any subclass entity.
24.1.48. @MapsId
The @MapsId (http://docs.oracle.com/javaee/7/api/javax/persistence/MapsId.html) annotation is used to specify that the entity identifier is
mapped by the currently annotated @ManyToOne or @OneToOne association.
24.1.49. @NamedAttributeNode
The @NamedAttributeNode (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedAttributeNode.html) annotation is used to specify
each individual attribute node that needs to be fetched by an Entity Graph.
24.1.50. @NamedEntityGraph
The @NamedEntityGraph (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedEntityGraph.html) annotation is used to specify an
Entity Graph that can be used by an entity query to override the default fetch plan.
24.1.51. @NamedEntityGraphs
The @NamedEntityGraphs (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedEntityGraphs.html) annotation is used to group
multiple @NamedEntityGraph annotations.
24.1.52. @NamedNativeQueries
The @NamedNativeQueries (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedNativeQueries.html) annotation is used to group
multiple @NamedNativeQuery annotations.
24.1.53. @NamedNativeQuery
The @NamedNativeQuery (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedNativeQuery.html) annotation is used to specify a
native SQL query that can be retrieved later by its name.
24.1.54. @NamedQueries
The @NamedQueries (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedQueries.html) annotation is used to group multiple
@NamedQuery annotations.
24.1.55. @NamedQuery
The @NamedQuery (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedQuery.html) annotation is used to specify a JPQL query
that can be retrieved later by its name.
24.1.56. @NamedStoredProcedureQueries
The @NamedStoredProcedureQueries (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedStoredProcedureQueries.html)
annotation is used to group multiple @NamedStoredProcedureQuery annotations.
24.1.57. @NamedStoredProcedureQuery
The @NamedStoredProcedureQuery (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedStoredProcedureQuery.html) annotation is
used to specify a stored procedure query that can be retrieved later by its name.
See the Using named queries to call stored procedures section for more info.
24.1.58. @NamedSubgraph
The @NamedSubgraph (http://docs.oracle.com/javaee/7/api/javax/persistence/NamedSubgraph.html) annotation is used to specify a subgraph
in an Entity Graph.
24.1.59. @OneToMany
The @OneToMany (http://docs.oracle.com/javaee/7/api/javax/persistence/OneToMany.html) annotation is used to specify a one-to-many
database relationship.
24.1.60. @OneToOne
The @OneToOne (http://docs.oracle.com/javaee/7/api/javax/persistence/OneToOne.html) annotation is used to specify a one-to-one database
relationship.
24.1.61. @OrderBy
The @OrderBy (http://docs.oracle.com/javaee/7/api/javax/persistence/OrderBy.html) annotation is used to specify the entity attributes used
for sorting when fetching the currently annotated collection.
24.1.62. @OrderColumn
The @OrderColumn (http://docs.oracle.com/javaee/7/api/javax/persistence/OrderColumn.html) annotation is used to specify that the currently
annotated collection order should be materialized in the database.
24.1.63. @PersistenceContext
The @PersistenceContext (http://docs.oracle.com/javaee/7/api/javax/persistence/PersistenceContext.html) annotation is used to specify
the EntityManager that needs to be injected as a dependency.
24.1.64. @PersistenceContexts
The @PersistenceContexts (http://docs.oracle.com/javaee/7/api/javax/persistence/PersistenceContexts.html) annotation is used to group
multiple @PersistenceContext annotations.
24.1.65. @PersistenceProperty
The @PersistenceProperty (http://docs.oracle.com/javaee/7/api/javax/persistence/PersistenceProperty.html) annotation is used by the
@PersistenceContext annotation to declare JPA provider properties that are passed to the underlying container when the
EntityManager instance is created.
24.1.66. @PersistenceUnit
The @PersistenceUnit (http://docs.oracle.com/javaee/7/api/javax/persistence/PersistenceUnit.html) annotation is used to specify the
EntityManagerFactory that needs to be injected as a dependency.
24.1.67. @PersistenceUnits
The @PersistenceUnits (http://docs.oracle.com/javaee/7/api/javax/persistence/PersistenceUnits.html) annotation is used to group multiple
@PersistenceUnit annotations.
24.1.68. @PostLoad
The @PostLoad (http://docs.oracle.com/javaee/7/api/javax/persistence/PostLoad.html) annotation is used to specify a callback method that
fires after an entity is loaded.
24.1.69. @PostPersist
The @PostPersist (http://docs.oracle.com/javaee/7/api/javax/persistence/PostPersist.html) annotation is used to specify a callback method
that fires after an entity is persisted.
24.1.70. @PostRemove
The @PostRemove (http://docs.oracle.com/javaee/7/api/javax/persistence/PostRemove.html) annotation is used to specify a callback method
that fires after an entity is removed.
24.1.71. @PostUpdate
The @PostUpdate (http://docs.oracle.com/javaee/7/api/javax/persistence/PostUpdate.html) annotation is used to specify a callback method
that fires after an entity is updated.
24.1.72. @PrePersist
The @PrePersist (http://docs.oracle.com/javaee/7/api/javax/persistence/PrePersist.html) annotation is used to specify a callback method
that fires before an entity is persisted.
24.1.73. @PreRemove
The @PreRemove (http://docs.oracle.com/javaee/7/api/javax/persistence/PreRemove.html) annotation is used to specify a callback method
that fires before an entity is removed.
24.1.74. @PreUpdate
The @PreUpdate (http://docs.oracle.com/javaee/7/api/javax/persistence/PreUpdate.html) annotation is used to specify a callback method
that fires before an entity is updated.
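A minimal sketch combining several of these lifecycle callbacks (the Invoice entity and its attributes are hypothetical):

```java
import java.time.LocalDateTime;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;

@Entity
public class Invoice {

    @Id
    private Long id;

    private LocalDateTime createdOn;
    private LocalDateTime updatedOn;

    // Fired before the INSERT statement is executed.
    @PrePersist
    void beforePersist() {
        createdOn = LocalDateTime.now();
    }

    // Fired before the UPDATE statement is executed.
    @PreUpdate
    void beforeUpdate() {
        updatedOn = LocalDateTime.now();
    }
}
```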
24.1.75. @PrimaryKeyJoinColumn
The @PrimaryKeyJoinColumn (http://docs.oracle.com/javaee/7/api/javax/persistence/PrimaryKeyJoinColumn.html) annotation is used to
specify that the primary key column of the currently annotated entity is also a foreign key to some other entity (e.g. a base class
table in a JOINED inheritance strategy, the primary table in a secondary table mapping, or the parent table in a @OneToOne
relationship).
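For instance, in a JOINED inheritance hierarchy, the subclass table's primary key column can be renamed with @PrimaryKeyJoinColumn (the Account and DebitAccount entities below are hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;
import javax.persistence.PrimaryKeyJoinColumn;

@Entity
@Inheritance(strategy = InheritanceType.JOINED)
class Account {

    @Id
    private Long id;
}

@Entity
// The subclass table's primary key column, named account_id here,
// is also a foreign key to the Account base table.
@PrimaryKeyJoinColumn(name = "account_id")
class DebitAccount extends Account {

    private Double overdraftFee;
}
```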
24.1.76. @PrimaryKeyJoinColumns
The @PrimaryKeyJoinColumns (http://docs.oracle.com/javaee/7/api/javax/persistence/PrimaryKeyJoinColumns.html) annotation is used to
group multiple @PrimaryKeyJoinColumn annotations.
24.1.77. @QueryHint
The @QueryHint (http://docs.oracle.com/javaee/7/api/javax/persistence/QueryHint.html) annotation is used to specify a JPA provider hint
used by a @NamedQuery or a @NamedNativeQuery annotation.
24.1.78. @SecondaryTable
The @SecondaryTable (http://docs.oracle.com/javaee/7/api/javax/persistence/SecondaryTable.html) annotation is used to specify a
secondary table for the currently annotated entity.
24.1.79. @SecondaryTables
The @SecondaryTables (http://docs.oracle.com/javaee/7/api/javax/persistence/SecondaryTables.html) annotation is used to group multiple
@SecondaryTable annotations.
24.1.80. @SequenceGenerator
The @SequenceGenerator (http://docs.oracle.com/javaee/7/api/javax/persistence/SequenceGenerator.html) annotation is used to specify the
database sequence used by the identifier generator of the currently annotated entity.
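A typical mapping sketch (the Product entity and sequence names are hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "product_generator")
    @SequenceGenerator(
        name = "product_generator",
        sequenceName = "product_sequence",
        allocationSize = 50
    )
    private Long id;
}
```

With an allocationSize greater than one, Hibernate can allocate identifiers in blocks, reducing the number of database sequence calls.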
24.1.81. @SqlResultSetMapping
The @SqlResultSetMapping (http://docs.oracle.com/javaee/7/api/javax/persistence/SqlResultSetMapping.html) annotation is used to specify
the ResultSet mapping of a native SQL query or stored procedure.
24.1.82. @SqlResultSetMappings
The @SqlResultSetMappings (http://docs.oracle.com/javaee/7/api/javax/persistence/SqlResultSetMappings.html) annotation is used to group
multiple @SqlResultSetMapping annotations.
24.1.83. @StoredProcedureParameter
The @StoredProcedureParameter (http://docs.oracle.com/javaee/7/api/javax/persistence/StoredProcedureParameter.html) annotation is
used to specify a parameter of a @NamedStoredProcedureQuery .
See the Using named queries to call stored procedures section for more info.
24.1.84. @Table
The @Table (http://docs.oracle.com/javaee/7/api/javax/persistence/Table.html) annotation is used to specify the primary table of the
currently annotated entity.
24.1.85. @TableGenerator
The @TableGenerator (http://docs.oracle.com/javaee/7/api/javax/persistence/TableGenerator.html) annotation is used to specify the
database table used by the identifier generator of the currently annotated entity.
24.1.86. @Temporal
The @Temporal (http://docs.oracle.com/javaee/7/api/javax/persistence/Temporal.html) annotation is used to specify the TemporalType of
the currently annotated java.util.Date or java.util.Calendar entity attribute.
24.1.87. @Transient
The @Transient (http://docs.oracle.com/javaee/7/api/javax/persistence/Transient.html) annotation is used to specify that a given entity
attribute should not be persisted.
24.1.88. @UniqueConstraint
The @UniqueConstraint (http://docs.oracle.com/javaee/7/api/javax/persistence/UniqueConstraint.html) annotation is used to specify a
unique constraint to be included by the automated schema generator for the primary or secondary table associated with the
currently annotated entity.
24.1.89. @Version
The @Version (http://docs.oracle.com/javaee/7/api/javax/persistence/Version.html) annotation is used to specify the version attribute used
for optimistic locking.
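A minimal sketch (the Ticket entity is hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Ticket {

    @Id
    private Long id;

    // Incremented on every modification; the generated UPDATE statement
    // includes a WHERE version = ? check, and a version mismatch triggers
    // an OptimisticLockException.
    @Version
    private int version;
}
```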
24.2.1. @AccessType
The @AccessType (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/AccessType.html) annotation is deprecated.
You should use either the JPA @Access or the Hibernate native @AttributeAccessor annotation.
24.2.2. @Any
The @Any (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Any.html) annotation is used to define a polymorphic
any-to-one association which can point to one of several entity types.
24.2.3. @AnyMetaDef
The @AnyMetaDef (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/AnyMetaDef.html) annotation is used to
provide metadata about an @Any or @ManyToAny mapping.
24.2.4. @AnyMetaDefs
The @AnyMetaDefs (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/AnyMetaDefs.html) annotation is used to
group multiple @AnyMetaDef annotations.
24.2.5. @AttributeAccessor
The @AttributeAccessor (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/AttributeAccessor.html) annotation is
used to specify a custom PropertyAccessStrategy
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/property/access/spi/PropertyAccessStrategy.html).
Should only be used to name a custom PropertyAccessStrategy . For property/field access type, the JPA @Access annotation
should be preferred.
However, if this annotation is used with either value="property" or value="field", it will act just as the corresponding usage of the
JPA @Access annotation.
24.2.6. @BatchSize
The @BatchSize (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/BatchSize.html) annotation is used to specify
the size for batch loading the entries of a lazy collection.
24.2.7. @Cache
The @Cache (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Cache.html) annotation is used to specify the
CacheConcurrencyStrategy (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/CacheConcurrencyStrategy.html) of
a root entity or a collection.
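A minimal sketch (the Country entity is hypothetical; the JPA @Cacheable annotation is typically combined with the Hibernate @Cache annotation to pick the concurrency strategy):

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
// Entries of this entity are stored in the second-level cache using the
// READ_WRITE concurrency strategy.
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Country {

    @Id
    private Long id;

    private String name;
}
```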
24.2.8. @Cascade
The @Cascade (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Cascade.html) annotation is used to apply the
Hibernate specific CascadeType (http://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/CascadeType.html) strategies
(e.g. CascadeType.LOCK , CascadeType.SAVE_UPDATE , CascadeType.REPLICATE ) on a given association. For the standard cascade
strategies, use the JPA CascadeType (http://docs.oracle.com/javaee/7/api/javax/persistence/CascadeType.html) instead.
When combining both JPA and Hibernate CascadeType strategies, Hibernate will merge both sets of cascades.
24.2.9. @Check
The @Check (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Check.html) annotation is used to specify an
arbitrary SQL CHECK constraint which can be defined at the class level.
24.2.10. @CollectionId
The @CollectionId (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/CollectionId.html) annotation is used to
specify an identifier column for an idbag collection.
24.2.11. @CollectionType
The @CollectionType (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/CollectionType.html) annotation is used
to specify a custom collection type.
The collection can also name a @Type , which defines the Hibernate Type of the collection elements.
24.2.12. @ColumnDefault
The @ColumnDefault (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ColumnDefault.html) annotation is used to
specify the DEFAULT DDL value to apply when using the automated schema generator.
The same behavior can be achieved using the columnDefinition attribute of the JPA @Column annotation.
See the Default value for a database column chapter for more info.
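A minimal sketch (the Account entity is hypothetical; @DynamicInsert is added so that null columns are omitted from the INSERT, letting the database apply the declared default):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.ColumnDefault;
import org.hibernate.annotations.DynamicInsert;

@Entity
@DynamicInsert
public class Account {

    @Id
    private Long id;

    // Generates DDL of the form: description VARCHAR(255) DEFAULT 'N/A'
    @ColumnDefault("'N/A'")
    private String description;
}
```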
24.2.13. @Columns
The @Columns (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Columns.html) annotation is used to group
multiple JPA @Column annotations.
24.2.14. @ColumnTransformer
The @ColumnTransformer (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ColumnTransformer.html) annotation
is used to customize how a given column value is read from or written to the database.
24.2.15. @ColumnTransformers
The @ColumnTransformers (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ColumnTransformers.html)
annotation is used to group multiple @ColumnTransformer annotations.
24.2.16. @CreationTimestamp
The @CreationTimestamp (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/CreationTimestamp.html) annotation is used to specify that the currently annotated temporal attribute should be set with the current JVM timestamp value when the entity is first persisted.
24.2.17. @DiscriminatorFormula
The @DiscriminatorFormula (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/DiscriminatorFormula.html)
annotation is used to specify a Hibernate @Formula to resolve the inheritance discriminator value.
24.2.18. @DiscriminatorOptions
The @DiscriminatorOptions (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/DiscriminatorOptions.html)
annotation is used to provide the force and insert Discriminator properties.
24.2.19. @DynamicInsert
The @DynamicInsert (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/DynamicInsert.html) annotation is used to
specify that the INSERT SQL statement should be generated whenever an entity is to be persisted.
By default, Hibernate uses a cached INSERT statement that sets all table columns. When the entity is annotated with the
@DynamicInsert annotation, the PreparedStatement is going to include only the non-null columns.
See the @CreationTimestamp mapping section for more info on how @DynamicInsert works.
24.2.20. @DynamicUpdate
The @DynamicUpdate (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/DynamicUpdate.html) annotation is used
to specify that the UPDATE SQL statement should be generated whenever an entity is modified.
By default, Hibernate uses a cached UPDATE statement that sets all table columns. When the entity is annotated with the
@DynamicUpdate annotation, the PreparedStatement is going to include only the columns whose values have been changed.
For reattachment of detached entities, the dynamic update is not possible without having the
@SelectBeforeUpdate annotation as well.
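A minimal sketch (the Product entity is hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate
public class Product {

    @Id
    private Long id;

    private String name;

    private Double price;

    // With @DynamicUpdate, modifying only the price generates:
    //   UPDATE Product SET price = ? WHERE id = ?
    // rather than an UPDATE that also sets the unchanged name column.
}
```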
24.2.21. @Entity
The @Entity (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Entity.html) annotation is deprecated. Use the JPA
@Entity annotation instead.
24.2.22. @Fetch
The @Fetch (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Fetch.html) annotation is used to specify the
Hibernate specific FetchMode (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FetchMode.html) (e.g. JOIN ,
SELECT , SUBSELECT ) used for the currently annotated association:
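For example (the Department and Employee entities are hypothetical):

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;

import org.hibernate.annotations.Fetch;
import org.hibernate.annotations.FetchMode;

@Entity
public class Department {

    @Id
    private Long id;

    // SUBSELECT loads the employees collections for all the Department
    // entities in the current persistence context with a single query.
    @OneToMany(mappedBy = "department")
    @Fetch(FetchMode.SUBSELECT)
    private List<Employee> employees;
}
```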
24.2.23. @FetchProfile
The @FetchProfile (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FetchProfile.html) annotation is used to
specify a custom fetching profile, similar to a JPA Entity Graph.
24.2.24. @FetchProfile.FetchOverride
The @FetchProfile.FetchOverride
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FetchProfile.FetchOverride.html) annotation is used in
conjunction with the @FetchProfile annotation, and it’s used for overriding the fetching strategy of a particular entity
association.
24.2.25. @FetchProfiles
The @FetchProfiles (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FetchProfiles.html) annotation is used to
group multiple @FetchProfile annotations.
24.2.26. @Filter
The @Filter (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Filter.html) annotation is used to add filters to an
entity or the target entity of a collection.
24.2.27. @FilterDef
The @FilterDef (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FilterDef.html) annotation is used to specify a
@Filter definition (name, default condition and parameter types, if any).
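A minimal sketch pairing @FilterDef with @Filter (the Account entity, filter name, and column name are hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;

@Entity
// Declares the filter name and its parameter types.
@FilterDef(
    name = "activeAccount",
    parameters = @ParamDef(name = "active", type = "boolean")
)
// Binds the filter condition to this entity.
@Filter(name = "activeAccount", condition = "active = :active")
public class Account {

    @Id
    private Long id;

    private boolean active;
}
```

Filters are disabled by default and must be enabled per Session, e.g. session.enableFilter("activeAccount").setParameter("active", true).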
24.2.28. @FilterDefs
The @FilterDefs (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FilterDefs.html) annotation is used to group
multiple @FilterDef annotations.
24.2.29. @FilterJoinTable
The @FilterJoinTable (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FilterJoinTable.html) annotation is used
to add @Filter capabilities to a join table collection.
24.2.30. @FilterJoinTables
The @FilterJoinTables (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/FilterJoinTables.html) annotation is
used to group multiple @FilterJoinTable annotations.
24.2.31. @Filters
The @Filters (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Filters.html) annotation is used to group
multiple @Filter annotations.
24.2.32. @ForeignKey
The @ForeignKey (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ForeignKey.html) annotation is deprecated.
Use the JPA 2.1 @ForeignKey annotation instead.
24.2.33. @Formula
The @Formula (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Formula.html) annotation is used to specify an
SQL fragment that is executed in order to populate a given entity attribute.
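For example (the Account entity and its attributes are hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Formula;

@Entity
public class Account {

    @Id
    private Long id;

    private Double credit;

    private Double rate;

    // Calculated by the database at SELECT time; the attribute is
    // read-only and is never written back.
    @Formula("credit * rate")
    private Double interest;
}
```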
24.2.34. @Generated
The @Generated (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Generated.html) annotation is used to specify
that the currently annotated entity attribute is generated by the database.
24.2.35. @GeneratorType
The @GeneratorType (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/GeneratorType.html) annotation is used to
provide a ValueGenerator (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/tuple/ValueGenerator.html) and a
GenerationTime (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/GenerationTime.html) for the currently
annotated generated attribute.
24.2.36. @GenericGenerator
The @GenericGenerator (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/GenericGenerator.html) annotation
can be used to configure any Hibernate identifier generator.
24.2.37. @GenericGenerators
The @GenericGenerators (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/GenericGenerators.html) annotation
is used to group multiple @GenericGenerator annotations.
24.2.38. @Immutable
The @Immutable (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Immutable.html) annotation is used to specify
that the annotated entity, attribute, or collection is immutable.
24.2.39. @Index
The @Index (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Index.html) annotation is deprecated. Use the JPA
@Index annotation instead.
24.2.40. @IndexColumn
The @IndexColumn (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/IndexColumn.html) annotation is
deprecated. Use the JPA @OrderColumn annotation instead.
24.2.41. @JoinColumnOrFormula
The @JoinColumnOrFormula (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/JoinColumnOrFormula.html)
annotation is used to specify that the entity association is resolved either through a FOREIGN KEY join (e.g. @JoinColumn ) or
using the result of a given SQL formula (e.g. @JoinFormula ).
24.2.42. @JoinColumnsOrFormulas
The @JoinColumnsOrFormulas (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/JoinColumnsOrFormulas.html)
annotation is used to group multiple @JoinColumnOrFormula annotations.
24.2.43. @JoinFormula
The @JoinFormula (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/JoinFormula.html) annotation is used as a
replacement for @JoinColumn when the association does not have a dedicated FOREIGN KEY column.
24.2.44. @LazyCollection
The @LazyCollection (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyCollection.html) annotation is used
to specify the lazy fetching behavior of a given collection. The possible values are given by the LazyCollectionOption
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyCollectionOption.html) enumeration:
TRUE
FALSE
EXTRA
The TRUE and FALSE values are deprecated since you should be using the JPA FetchType
(http://docs.oracle.com/javaee/7/api/javax/persistence/FetchType.html) attribute of the @ElementCollection , @OneToMany , or
@ManyToMany collection.
The EXTRA value has no equivalent in the JPA specification, and it’s used to avoid loading the entire collection even when the
collection is accessed for the first time. Each element is fetched individually using a secondary query.
24.2.45. @LazyGroup
The @LazyGroup (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyGroup.html) annotation is used to specify
that an entity attribute should be fetched along with all the other attributes belonging to the same group.
To load entity attributes lazily, bytecode enhancement is needed. By default, all non-collection attributes are loaded in one group
named "DEFAULT".
This annotation allows defining different groups of attributes to be initialized together when accessing one attribute in the group.
24.2.46. @LazyToOne
The @LazyToOne (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyToOne.html) annotation is used to specify the laziness option, represented by LazyToOneOption (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/LazyToOneOption.html), of a @ManyToOne or @OneToOne association.
FALSE
Eagerly load the association. This one is not needed since the JPA FetchType.EAGER offers the same behavior.
NO_PROXY
This option will fetch the association lazily while returning the real entity object.
PROXY
This option will fetch the association lazily while returning a proxy instead.
24.2.47. @ListIndexBase
The @ListIndexBase (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ListIndexBase.html) annotation is used to
specify the start value for a list index, as stored in the database.
By default, List indexes are stored starting at zero. Generally used in conjunction with @OrderColumn .
24.2.48. @Loader
The @Loader (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Loader.html) annotation is used to override the
default SELECT query used when loading an entity.
24.2.49. @ManyToAny
The @ManyToAny (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ManyToAny.html) annotation is used to specify
a many-to-one association when the target type is dynamically resolved.
24.2.50. @MapKeyType
The @MapKeyType (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/MapKeyType.html) annotation is used to
specify the map key type.
24.2.51. @MetaValue
The @MetaValue (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/MetaValue.html) annotation is used by the
@AnyMetaDef annotation to specify the association between a given discriminator value and an entity type.
24.2.52. @NamedNativeQueries
The @NamedNativeQueries (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NamedNativeQueries.html)
annotation is used to group multiple @NamedNativeQuery annotations.
24.2.53. @NamedNativeQuery
The @NamedNativeQuery (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NamedNativeQuery.html) annotation
extends the JPA @NamedNativeQuery with Hibernate specific features, like:
if the query should be cached, and which cache region should be used
if the query is read-only, hence it does not store the resulted entities into the currently running Persistence Context
24.2.54. @NamedQueries
The @NamedQueries (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NamedQueries.html) annotation is used to
group multiple @NamedQuery annotations.
24.2.55. @NamedQuery
The @NamedQuery (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NamedQuery.html) annotation extends the
JPA @NamedQuery with Hibernate specific features, like:
if the query should be cached, and which cache region should be used
if the query is read-only, hence it does not store the resulted entities into the currently running Persistence Context
24.2.56. @Nationalized
The @Nationalized (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Nationalized.html) annotation is used to
specify that the currently annotated attribute is a character type (e.g. String , Character , Clob ) that is stored in a nationalized
column type ( NVARCHAR , NCHAR , NCLOB ).
24.2.57. @NaturalId
The @NaturalId (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NaturalId.html) annotation is used to specify
that the currently annotated attribute is part of the natural id of the entity.
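A minimal sketch (the Book entity is hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.NaturalId;

@Entity
public class Book {

    @Id
    private Long id;

    // A business key: immutable by default and backed by a UNIQUE constraint.
    @NaturalId
    private String isbn;
}
```

The entity can then be looked up via session.bySimpleNaturalId(Book.class).load(isbn).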
24.2.58. @NaturalIdCache
The @NaturalIdCache (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NaturalIdCache.html) annotation is used
to specify that the natural id values associated with the annotated entity should be stored in the second-level cache.
24.2.59. @NotFound
The @NotFound (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NotFound.html) annotation is used to specify
the NotFoundAction (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/NotFoundAction.html) strategy for when
an element is not found in a given association.
EXCEPTION
An exception is thrown when an element is not found (default and recommended).
IGNORE
Ignore the element when not found in the database.
24.2.60. @OnDelete
The @OnDelete (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/OnDelete.html) annotation is used to specify the
delete strategy employed by the currently annotated collection, array or joined subclasses. This annotation is used by the
automated schema generation tool to generate the appropriate FOREIGN KEY DDL cascade directive.
CASCADE
Use the database FOREIGN KEY cascade capabilities.
NO_ACTION
Take no action.
24.2.61. @OptimisticLock
The @OptimisticLock (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/OptimisticLock.html) annotation is used
to specify if the currently annotated attribute will trigger an entity version increment upon being modified.
24.2.62. @OptimisticLocking
The @OptimisticLocking (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/OptimisticLocking.html) annotation is
used to specify the optimistic locking strategy of the currently annotated entity.
NONE
The implicit optimistic locking mechanism is disabled.
VERSION
The implicit optimistic locking mechanism is using a dedicated version column.
ALL
The implicit optimistic locking mechanism is using all attributes as part of an expanded WHERE clause restriction for the
UPDATE and DELETE SQL statements.
DIRTY
The implicit optimistic locking mechanism is using the dirty attributes (the attributes that were modified) as part of an
expanded WHERE clause restriction for the UPDATE and DELETE SQL statements.
24.2.63. @OrderBy
The @OrderBy (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/OrderBy.html) annotation is used to specify a
SQL ordering directive for sorting the currently annotated collection.
It differs from the JPA @OrderBy annotation because the JPA annotation expects a JPQL order-by fragment, not an SQL directive.
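For example (the Person and Phone entities are hypothetical; the fully qualified annotation name is used here to distinguish it from the JPA annotation of the same name):

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Person {

    @Id
    private Long id;

    // An SQL fragment, appended as-is to the generated ORDER BY clause,
    // so database functions such as CHAR_LENGTH may be used.
    @OneToMany(mappedBy = "person")
    @org.hibernate.annotations.OrderBy(clause = "CHAR_LENGTH(number) DESC")
    private List<Phone> phones;
}
```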
24.2.64. @ParamDef
The @ParamDef (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ParamDef.html) annotation is used in
conjunction with @FilterDef so that the Hibernate Filter can be customized with runtime-provided parameter values.
24.2.65. @Parameter
The @Parameter (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Parameter.html) annotation is a generic
parameter (basically a key/value combination) used to parametrize other annotations, like @CollectionType ,
@GenericGenerator , and @Type , @TypeDef .
24.2.66. @Parent
The @Parent (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Parent.html) annotation is used to specify that the
currently annotated embeddable attribute references back the owning entity.
24.2.67. @Persister
The @Persister (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Persister.html) annotation is used to specify a custom entity or collection persister.
24.2.68. @Polymorphism
The @Polymorphism (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Polymorphism.html) annotation is used to
define the PolymorphismType (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/PolymorphismType.html)
Hibernate will apply to entity hierarchies.
EXPLICIT
The currently annotated entity is retrieved only if explicitly asked.
IMPLICIT
The currently annotated entity is retrieved if any of its super entities are retrieved. This is the default option.
24.2.69. @Proxy
The @Proxy (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Proxy.html) annotation is used to specify a custom
proxy implementation for the currently annotated entity.
24.2.70. @RowId
The @RowId (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/RowId.html) annotation is used to specify the
database column used as a ROWID pseudocolumn. For instance, Oracle defines the ROWID pseudocolumn
(https://docs.oracle.com/cd/B19306_01/server.102/b14200/pseudocolumns008.htm) which provides the address of every table row.
According to Oracle documentation, ROWID is the fastest way to access a single row from a table.
24.2.71. @SelectBeforeUpdate
The @SelectBeforeUpdate (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SelectBeforeUpdate.html) annotation
is used to specify that the currently annotated entity state should be selected from the database when determining whether to perform
an update when the detached entity is reattached.
See the OptimisticLockType.DIRTY mapping section for more info on how @SelectBeforeUpdate works.
24.2.72. @Sort
The @Sort (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Sort.html) annotation is deprecated. Use the
Hibernate specific @SortComparator or @SortNatural annotations instead.
24.2.73. @SortComparator
The @SortComparator (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SortComparator.html) annotation is used
to specify a Comparator for sorting the Set / Map in-memory.
24.2.74. @SortNatural
The @SortNatural (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SortNatural.html) annotation is used to
specify that the Set / Map should be sorted using natural sorting.
24.2.75. @Source
The @Source (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Source.html) annotation is used in conjunction
with a @Version timestamp entity attribute indicating the SourceType
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SourceType.html) of the timestamp value.
DB
Get the timestamp from the database.
VM
Get the timestamp from the current JVM.
See the Database-generated version timestamp mapping section for more info.
24.2.76. @SQLDelete
The @SQLDelete (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SQLDelete.html) annotation is used to specify a
custom SQL DELETE statement for the currently annotated entity or collection.
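A common use is a soft-delete sketch that pairs @SQLDelete with @Where (the Account entity and the deleted column are hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.SQLDelete;
import org.hibernate.annotations.Where;

@Entity
// Removing the entity runs this UPDATE instead of an actual DELETE.
@SQLDelete(sql = "UPDATE Account SET deleted = true WHERE id = ?")
// Soft-deleted rows are excluded from every subsequent fetch.
@Where(clause = "deleted = false")
public class Account {

    @Id
    private Long id;

    private boolean deleted;
}
```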
24.2.77. @SQLDeleteAll
The @SQLDeleteAll (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SQLDeleteAll.html) annotation is used to
specify a custom SQL DELETE statement when removing all elements of the currently annotated collection.
24.2.78. @SqlFragmentAlias
The @SqlFragmentAlias (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SqlFragmentAlias.html) annotation is
used to specify an alias for a Hibernate @Filter .
The alias (e.g. myAlias ) can then be used in the @Filter condition clause using the {alias} (e.g. {myAlias} ) placeholder.
24.2.79. @SQLInsert
The @SQLInsert (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SQLInsert.html) annotation is used to specify a
custom SQL INSERT statement for the currently annotated entity or collection.
24.2.80. @SQLUpdate
The @SQLUpdate (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/SQLUpdate.html) annotation is used to specify
a custom SQL UPDATE statement for the currently annotated entity or collection.
24.2.81. @Subselect
The @Subselect (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Subselect.html) annotation is used to specify
an immutable and read-only entity using a custom SQL SELECT statement.
See the Mapping the entity to a SQL query section for more info.
24.2.82. @Synchronize
The @Synchronize (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Synchronize.html) annotation is usually
used in conjunction with the @Subselect annotation to specify the list of database tables used by the @Subselect SQL query.
With this information in place, Hibernate will properly trigger an entity flush whenever a query targeting the @Subselect entity
is to be executed while the Persistence Context has scheduled some insert/update/delete actions against the database tables used
by the @Subselect SQL query.
Therefore, the @Synchronize annotation prevents the derived entity from returning stale data when executing entity queries
against the @Subselect entity.
See the Mapping the entity to a SQL query section for more info.
24.2.83. @Table
The @Table (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Table.html) annotation is used to specify additional
information to a JPA @Table annotation, like custom INSERT , UPDATE or DELETE statements or a specific FetchMode
(https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/FetchMode.html).
See the @SecondaryTable mapping section for more info about Hibernate-specific @Table mapping.
24.2.84. @Tables
The @Tables (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Tables.html) annotation is used to group multiple
@Table annotations.
24.2.85. @Target
The @Target (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Target.html) annotation is used to specify an
explicit target implementation when the currently annotated association is using an interface type.
24.2.86. @Tuplizer
The @Tuplizer (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Tuplizer.html) annotation is used to specify a
custom tuplizer for the currently annotated entity or embeddable.
24.2.87. @Tuplizers
The @Tuplizers (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Tuplizers.html) annotation is used to group
multiple @Tuplizer annotations.
24.2.88. @Type
The @Type (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Type.html) annotation is used to specify the
Hibernate @Type (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/type/Type.html) used by the currently annotated basic
attribute.
24.2.89. @TypeDef
The @TypeDef (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/TypeDef.html) annotation is used to specify a
@Type (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/type/Type.html) definition which can later be reused for multiple
basic attribute mappings.
24.2.90. @TypeDefs
The @TypeDefs (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/TypeDefs.html) annotation is used to group
multiple @TypeDef annotations.
24.2.91. @UpdateTimestamp
The @UpdateTimestamp (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/UpdateTimestamp.html) annotation is
used to specify that the currently annotated timestamp attribute should be updated with the current JVM timestamp whenever
the owning entity gets modified. The supported property types are:
java.util.Date
java.util.Calendar
java.sql.Date
java.sql.Time
java.sql.Timestamp
24.2.92. @ValueGenerationType
The @ValueGenerationType (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/ValueGenerationType.html)
annotation is used to specify that the current annotation type should be used as a generator annotation type.
24.2.93. @Where
The @Where (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/Where.html) annotation is used to specify a custom SQL WHERE clause used when fetching an entity or a collection.
24.2.94. @WhereJoinTable
The @WhereJoinTable (https://docs.jboss.org/hibernate/orm/5.3/javadocs/org/hibernate/annotations/WhereJoinTable.html) annotation is used
to specify a custom SQL WHERE clause used when fetching a join collection table.
25. Performance Tuning and Best Practices
Every enterprise system is unique. However, having a very efficient data access layer is a common requirement for many
enterprise applications. Hibernate comes with a great variety of features that can help you tune the data access layer.
25.1. Schema management
An automated schema migration tool (e.g. Flyway (https://flywaydb.org/), Liquibase (http://www.liquibase.org/)) allows you to use any
database-specific DDL feature (e.g. Rules, Triggers, Partitioned Tables). Every migration should have an associated script, which is
stored on the Version Control System, along with the application source code.
When the application is deployed on a production-like QA environment, and the deploy worked as expected, then pushing the
deploy to a production environment should be straightforward since the latest schema migration was already tested.
You should always use an automatic schema migration tool and have all the migration scripts stored in
the Version Control System.
25.2. Logging
Whenever you’re using a framework that generates SQL statements on your behalf, you have to ensure that the generated
statements are the ones that you intended in the first place.
There are several alternatives for logging statements. You can log statements by configuring the underlying logging framework.
For Log4j, you can use the following logger configuration:
PROPERTIES
### log just the SQL
log4j.logger.org.hibernate.SQL=debug
However, there are some other alternatives like using datasource-proxy or p6spy. The advantage of using a JDBC Driver or
DataSource proxy is that you can go beyond simple SQL logging:
Another advantage of using a DataSource proxy is that you can assert the number of executed statements at test time. This way,
you can have the integration tests fail when an N+1 query issue is automatically detected.
25.3. JDBC batching
Not only INSERT and UPDATE statements, but even DELETE statements can be batched as well. For INSERT and UPDATE
statements, make sure that you have all the right configuration properties in place, like ordering inserts and updates and
activating batching for versioned data. Check out this article for more details on this topic.
For DELETE statements, there is no option to order parent and child statements, so cascading can interfere with the JDBC
batching process.
Unlike frameworks that do not automate SQL statement generation, Hibernate makes it very easy to activate JDBC-level
batching, as indicated in the Batching chapter.
25.4. Mapping
Choosing the right mappings is very important for a high-performance data access layer. From the identifier generators to
associations, there are many options to choose from, yet not all choices are equal from a performance perspective.
25.4.1. Identifiers
When it comes to identifiers, you can either choose a natural id or a synthetic key.
For natural identifiers, the assigned identifier generator is the right choice.
For synthetic keys, the application developer can choose either a randomly generated fixed-size value (e.g. UUID) or a numerical
identifier. Numerical identifiers are very practical, being more compact than their UUID counterparts, and there are multiple
generators to choose from:
IDENTITY
SEQUENCE
TABLE
Although the TABLE generator addresses the portability concern, in reality, it performs poorly because it requires emulating a
database sequence using a separate transaction and row-level locks. For this reason, the choice is usually between IDENTITY and
SEQUENCE .
If the underlying database supports sequences, you should always use them for your Hibernate entity
identifiers.
Only if the relational database does not support sequences (e.g. MySQL 5.7) should you use the
IDENTITY generator. However, keep in mind that the IDENTITY generator disables JDBC batching
for INSERT statements.
If you’re using the SEQUENCE generator, then you should be using the enhanced identifier generators that were enabled by
default in Hibernate 5. The pooled and the pooled-lo optimizers are very useful to reduce the number of database roundtrips
when writing multiple entities per database transaction.
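As an illustrative sketch (the Post entity and sequence name are assumptions, not taken from this guide), a SEQUENCE-based identifier can benefit from the pooled optimizer simply by declaring an allocation size greater than 1:

```java
@Entity(name = "Post")
public static class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "post_sequence")
    @SequenceGenerator(
        name = "post_sequence",
        sequenceName = "post_sequence",
        //an allocationSize greater than 1 activates the pooled-style optimizer,
        //so a block of 50 identifiers costs a single database roundtrip
        allocationSize = 50
    )
    private Long id;

    //Getters and setters are omitted for brevity
}
```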
25.4.2. Associations
JPA offers four entity association types:
@ManyToOne
@OneToOne
@OneToMany
@ManyToMany
Because object associations can be bidirectional, there are many possible combinations of associations. However, not every
possible association type is efficient from a database perspective.
The closer the association mapping is to the underlying database relationship, the better it will
perform.
On the other hand, the more exotic the association mapping, the greater the chance of it being inefficient.
Therefore, the @ManyToOne and the child-side @OneToOne associations are best suited to represent a FOREIGN KEY relationship.
The parent-side @OneToOne association requires bytecode enhancement so that the association can be loaded lazily. Otherwise,
the parent-side is always fetched even if the association is marked with FetchType.LAZY .
For this reason, it’s best to map @OneToOne association using @MapsId so that the PRIMARY KEY is shared between the child and
the parent entities. When using @MapsId , the parent-side becomes redundant since the child-entity can be easily fetched using
the parent entity identifier.
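A minimal sketch of such a @MapsId mapping, assuming hypothetical Post and PostDetails entities:

```java
@Entity(name = "PostDetails")
public static class PostDetails {

    @Id
    private Long id;

    //the child shares the PRIMARY KEY of its parent,
    //so no separate FOREIGN KEY column is needed
    @OneToOne(fetch = FetchType.LAZY)
    @MapsId
    private Post post;

    //Getters and setters are omitted for brevity
}
```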
Collections can be of two types:
unidirectional
bidirectional
For unidirectional collections, Sets are the best choice because they generate the most efficient SQL statements. Unidirectional
Lists are less efficient than a @ManyToOne association.
Bidirectional associations are usually a better choice because the @ManyToOne side controls the association.
Embeddable collections ( @ElementCollection ) are unidirectional associations, hence Sets are the most efficient, followed by
ordered Lists , whereas bags (unordered Lists ) are the least efficient.
The @ManyToMany annotation is rarely a good choice because it treats both sides as unidirectional associations.
For this reason, it’s much better to map the link table as depicted in the Bidirectional many-to-many with link entity lifecycle
section. Each FOREIGN KEY column will be mapped as a @ManyToOne association. On each parent-side, a bidirectional
@OneToMany association is going to map to the aforementioned @ManyToOne relationship in the link entity.
Just because you have support for collections, it does not mean that you have to turn any one-to-
many database relationship into a collection.
Sometimes, a @ManyToOne association is sufficient, and the collection can be simply replaced by an
entity query which is easier to paginate or filter.
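For instance, instead of mapping a @OneToMany collection, the child rows can be fetched with a paginated entity query (the PostComment entity and its attributes are illustrative assumptions):

```java
//fetch the latest 25 comments of a given Post without mapping
//a comments collection on the parent entity
List<PostComment> comments = entityManager.createQuery(
    "select pc " +
    "from PostComment pc " +
    "where pc.post.id = :postId " +
    "order by pc.createdOn desc", PostComment.class)
.setParameter("postId", postId)
.setMaxResults(25)
.getResultList();
```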
25.5. Inheritance
JPA offers SINGLE_TABLE , JOINED , and TABLE_PER_CLASS to deal with inheritance mapping, and each of these strategies has
advantages and disadvantages.
SINGLE_TABLE performs the best in terms of executed SQL statements. However, you cannot use NOT NULL constraints at the
column level. You can still use triggers and rules to enforce such constraints, but it’s not as straightforward.
JOINED addresses the data integrity concerns because every subclass is associated with a different table. Polymorphic queries
or @OneToMany base class associations don’t perform very well with this strategy. However, polymorphic @ManyToOne
associations are fine, and they can provide a lot of value.
TABLE_PER_CLASS should be avoided since it does not render efficient SQL statements.
25.6. Fetching
Fetching too much data is the number one performance issue for the vast majority of JPA applications.
Hibernate supports both entity queries (JPQL/HQL and Criteria API) and native SQL statements. Entity queries are useful only if
you need to modify the fetched entities, therefore benefiting from the automatic dirty checking mechanism.
For read-only transactions, you should fetch DTO projections because they allow you to select just as many columns as you need
to fulfill a certain business use case. This has many benefits like reducing the load on the currently running Persistence Context
because DTO projections don’t need to be managed.
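A DTO projection can be expressed with a JPQL constructor expression; the PostSummary class below is a hypothetical DTO with a matching constructor:

```java
//select only the columns required by the use case;
//the resulting DTOs are not managed by the Persistence Context
List<PostSummary> summaries = entityManager.createQuery(
    "select new com.example.PostSummary(p.id, p.title) " +
    "from Post p", PostSummary.class)
.getResultList();
```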
JPA offers two fetching strategies:
EAGER
LAZY
Prior to JPA, Hibernate used to have all associations as LAZY by default. However, when the JPA 1.0
specification emerged, it was thought that not all providers would use proxies. Hence, the @ManyToOne
and the @OneToOne associations are now EAGER by default.
The EAGER fetching strategy cannot be overwritten on a per query basis, so the association is always going to
be retrieved even if you don’t need it. Moreover, if you forget to JOIN FETCH an EAGER association in a JPQL query,
Hibernate will initialize it with a secondary statement, which in turn can lead to N+1 query issues.
So, EAGER fetching is to be avoided. For this reason, it’s better if all associations are marked as LAZY by default.
However, LAZY associations must be initialized prior to being accessed. Otherwise, a LazyInitializationException is thrown.
There are good and bad ways to treat the LazyInitializationException .
The best way to deal with LazyInitializationException is to fetch all the required associations prior to closing the Persistence
Context. The JOIN FETCH directive is good for @ManyToOne and @OneToOne associations, and for at most one collection (e.g.
@OneToMany or @ManyToMany ). If you need to fetch multiple collections, to avoid a Cartesian Product, you should use secondary
queries which are triggered either by navigating the LAZY association or by calling Hibernate#initialize(proxy) method.
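As a sketch (the entity and association names are illustrative assumptions), the required associations can be fetched along with the entity, and a second collection can be initialized with a secondary query:

```java
//JOIN FETCH the @ManyToOne author and one collection (comments)
Post post = entityManager.createQuery(
    "select p " +
    "from Post p " +
    "join fetch p.author " +
    "join fetch p.comments " +
    "where p.id = :id", Post.class)
.setParameter("id", postId)
.getSingleResult();

//initialize a second collection with a secondary SELECT
//to avoid a Cartesian Product in the first query
Hibernate.initialize(post.getTags());
```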
25.7. Caching
Hibernate has two caching layers:
the first-level cache (Persistence Context) which provides application-level repeatable reads.
the second-level cache which, unlike application-level caches, doesn’t store entity aggregates but rather normalized, dehydrated
entity entries.
The first-level cache is not a caching solution "per se", being more useful for ensuring READ COMMITTED isolation level.
While the first-level cache is short-lived, being cleared when the underlying EntityManager is closed, the second-level cache is
tied to an EntityManagerFactory . Some second-level caching providers offer support for clusters. Therefore, a node needs only
to store a subset of the whole cached data.
Although the second-level cache can reduce transaction response time since entities are retrieved from the cache rather than
from the database, there are other options to achieve the same goal, and you should consider these alternatives prior to jumping
to a second-level cache layer:
tuning the underlying database cache so that the working set fits into memory, therefore reducing Disk I/O traffic.
optimizing database statements through JDBC batching, statement caching, and indexing to reduce the average response time,
therefore increasing throughput as well.
database replication, which is also a very valuable option to increase read-only transaction throughput.
After properly tuning the database, to further reduce the average response time and increase the system throughput, application-
level caching becomes inevitable.
Typically, a key-value application-level cache like Memcached (https://memcached.org/) or Redis (http://redis.io/) is a common choice to
store data aggregates. If you can duplicate all data in the key-value store, you have the option of taking down the database system
for maintenance without completely losing availability since read-only traffic can still be served from the cache.
One of the main challenges of using an application-level cache is ensuring data consistency across entity aggregates. That’s where
the second-level cache comes to the rescue. Being tightly integrated with Hibernate, the second-level cache can provide better
data consistency since entries are cached in a normalized fashion, just like in a relational database. Changing a parent entity only
requires a single entry cache update, as opposed to cache entry invalidation cascading in key-value stores.
The second-level cache supports the following concurrency strategies:
READ_ONLY
NONSTRICT_READ_WRITE
READ_WRITE
TRANSACTIONAL
READ_WRITE is a very good default concurrency strategy since it provides strong consistency guarantees without compromising
throughput. The TRANSACTIONAL concurrency strategy uses JTA. Hence, it’s more suitable when entities are frequently modified.
Both READ_WRITE and TRANSACTIONAL use write-through caching, while NONSTRICT_READ_WRITE is a read-through caching
strategy. For this reason, NONSTRICT_READ_WRITE is not very suitable if entities are changed frequently.
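A concurrency strategy is chosen per entity (or per collection); a minimal sketch with an illustrative Post entity:

```java
@Entity(name = "Post")
@Cacheable
//the Hibernate-specific @Cache annotation selects the concurrency strategy
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public static class Post {

    @Id
    private Long id;

    //Getters and setters are omitted for brevity
}
```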
When using clustering, the second-level cache entries are spread across multiple nodes. When using Infinispan distributed cache
(http://blog.infinispan.org/2015/10/hibernate-second-level-cache.html), only READ_WRITE and NONSTRICT_READ_WRITE are available for
read-write caches. Bear in mind that NONSTRICT_READ_WRITE offers a weaker consistency guarantee since stale updates are
possible.
For more about Hibernate Performance Tuning, check out the High-Performance Hibernate
(https://www.youtube.com/watch?v=BTdTEe9QL5k&t=1s) presentation from Devoxx France.
26. Legacy Bootstrapping
The legacy way to bootstrap a SessionFactory is via the org.hibernate.cfg.Configuration object. Configuration represents,
essentially, a single point for specifying all aspects of building the SessionFactory : everything from settings, to mappings, to
strategies, etc. I like to think of Configuration as a big pot to which we add a bunch of stuff (mappings, settings, etc) and from
which we eventually get a SessionFactory.
There are some significant drawbacks to this approach, which led to its deprecation and the
development of the new approach discussed in Native Bootstrapping. Configuration is semi-
deprecated but still available for use, in a limited form that eliminates these drawbacks. "Under the
covers", Configuration uses the new bootstrapping code, so the things available there are also available here
in terms of auto-discovery.
You can obtain the Configuration by instantiating it directly. You then specify mapping metadata (XML mapping documents,
annotated classes) that describe your application’s object model and its mapping to a SQL database.
JAVA
Configuration cfg = new Configuration()
// addResource does a classpath resource lookup
.addResource("Item.hbm.xml")
.addResource("Bid.hbm.xml")
.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect")
.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/test")
.setProperty("hibernate.order_updates", "true");
27. Migration
Configuration#addFile
Configuration#add(XmlDocument)
Configuration#addXML
Configuration#addCacheableFile
Configuration#addURL
Configuration#addInputStream
Configuration#addResource
Configuration#addClass
Configuration#addAnnotatedClass
Configuration#addPackage
Configuration#addJar
Configuration#addDirectory
Configuration#registerTypeContributor
Configuration#registerTypeOverride
Configuration#setProperty
Configuration#setProperties
Configuration#addProperties
Configuration#setNamingStrategy
Configuration#setImplicitNamingStrategy
Configuration#setPhysicalNamingStrategy
Configuration#configure
Configuration#setInterceptor
Configuration#setEntityNotFoundDelegate
Configuration#setSessionFactoryObserver
Configuration#setCurrentTenantIdentifierResolver
28. Legacy Domain Model
XML
<!--
~ Hibernate, Relational Persistence for Idiomatic Java
~
~ License: GNU Lesser General Public License (LGPL), version 2.1 or later.
~ See the lgpl.txt file in the root directory or <http://www.gnu.org/licenses/lgpl-2.1.html>.
-->
<version
column="version_column"
name="propertyName"
type="typename"
access="field|property|ClassName"
unsaved-value="null|negative|undefined"
generated="never|always"
insert="true|false"
node="element-name|@attribute-name|element/@attribute|."
/>
<timestamp
column="timestamp_column"
name="propertyName"
access="field|property|ClassName"
unsaved-value="null|undefined"
source="vm|db"
generated="never|always"
node="element-name|@attribute-name|element/@attribute|."
/>
29. Legacy Hibernate Criteria Queries
This appendix covers the legacy Hibernate org.hibernate.Criteria API, which should be considered
deprecated.
New development should focus on the JPA javax.persistence.criteria.CriteriaQuery API. Eventually, Hibernate-
specific criteria features will be ported as extensions to the JPA javax.persistence.criteria.CriteriaQuery . For
details on the JPA APIs, see Criteria.
JAVA
Criteria crit = sess.createCriteria(Cat.class);
crit.setMaxResults(50);
List cats = crit.list();
JAVA
@Entity(name = "ApplicationEvent")
public static class Event {

    @Id
    private Long id;

    //Getters and setters are omitted for brevity
}
JAVA
List<Event> events =
    entityManager.unwrap( Session.class )
    .createCriteria( "ApplicationEvent" )
    .list();
BASH
org.hibernate.MappingException: Unknown entity: ApplicationEvent
On the other hand, the Hibernate entity name (the fully qualified class name) works just fine:
JAVA
List<Event> events =
    entityManager.unwrap( Session.class )
    .createCriteria( Event.class.getName() )
    .list();
For more about this topic, check out the HHH-2597 (https://hibernate.atlassian.net/browse/HHH-2597) JIRA issue.
org.hibernate.criterion.Restrictions defines factory methods for obtaining certain built-in Criterion types.
JAVA
List cats = sess.createCriteria(Cat.class)
    .add( Restrictions.like("name", "Fritz%") )
    .add( Restrictions.between("weight", minWeight, maxWeight) )
    .list();
JAVA
List cats = sess.createCriteria(Cat.class)
    .add( Restrictions.like("name", "Fritz%") )
    .add( Restrictions.or(
        Restrictions.eq( "age", new Integer(0) ),
        Restrictions.isNull("age")
    ) )
    .list();
JAVA
List cats = sess.createCriteria(Cat.class)
    .add( Restrictions.in( "name", new String[] { "Fritz", "Izi", "Pk" } ) )
    .add( Restrictions.disjunction()
        .add( Restrictions.isNull("age") )
        .add( Restrictions.eq("age", new Integer(0) ) )
        .add( Restrictions.eq("age", new Integer(1) ) )
        .add( Restrictions.eq("age", new Integer(2) ) )
    )
    .list();
There are a range of built-in criterion types ( Restrictions subclasses). One of the most useful Restrictions allows you to
specify SQL directly.
JAVA
List cats = sess.createCriteria(Cat.class)
    .add( Restrictions.sqlRestriction("lower({alias}.name) like lower(?)", "Fritz%", StandardBasicTypes.STRING) )
    .list();
The {alias} placeholder will be replaced by the row alias of the queried entity.
You can also obtain a criterion from a Property instance. You can create a Property by calling Property.forName() :
JAVA
Property age = Property.forName("age");
List cats = sess.createCriteria(Cat.class)
    .add( Restrictions.disjunction()
        .add( age.isNull() )
        .add( age.eq( new Integer(0) ) )
        .add( age.eq( new Integer(1) ) )
        .add( age.eq( new Integer(2) ) )
    )
    .add( Property.forName("name").in( new String[] { "Fritz", "Izi", "Pk" } ) )
    .list();
JAVA
List cats = sess.createCriteria(Cat.class)
    .add( Property.forName("name").like("F%") )
    .addOrder( Property.forName("name").asc() )
    .addOrder( Property.forName("age").desc() )
    .setMaxResults(50)
    .list();
29.5. Associations
By navigating associations using createCriteria() you can specify constraints upon related entities:
JAVA
List cats = sess.createCriteria(Cat.class)
    .add( Restrictions.like("name", "F%") )
    .createCriteria("kittens")
        .add( Restrictions.like("name", "F%") )
    .list();
The second createCriteria() returns a new instance of Criteria that refers to the elements of the kittens collection.
JAVA
List cats = sess.createCriteria(Cat.class)
    .createAlias("kittens", "kt")
    .createAlias("mate", "mt")
    .add( Restrictions.eqProperty("kt.name", "mt.name") )
    .list();
The kittens collections held by the Cat instances returned by the previous two queries are not pre-filtered by the criteria. If you
want to retrieve just the kittens that match the criteria, you must use a ResultTransformer .
JAVA
List cats = sess.createCriteria(Cat.class)
    .createCriteria("kittens", "kt")
        .add( Restrictions.eq("name", "F%") )
    .setResultTransformer(Criteria.ALIAS_TO_ENTITY_MAP)
    .list();
Iterator iter = cats.iterator();
while ( iter.hasNext() ) {
    Map map = (Map) iter.next();
    Cat cat = (Cat) map.get(Criteria.ROOT_ALIAS);
    Cat kitten = (Cat) map.get("kt");
}
Additionally, you may manipulate the result set using a left outer join:
This will return all of the Cats with a mate whose name starts with "good", ordered by their mate’s age, and all cats who do not
have a mate. This is useful when there is a need to order or limit in the database prior to returning complex/large result sets, and
removes many instances where multiple queries would have to be performed and the results unioned by Java in memory.
Without this feature, first, all of the cats without a mate would need to be loaded in one query.
A second query would need to retrieve the cats with mates whose names start with "good", sorted by the mate’s age.
29.6. Dynamic association fetching
JAVA
List cats = sess.createCriteria(Cat.class)
    .add( Restrictions.like("name", "Fritz%") )
    .setFetchMode("mate", FetchMode.EAGER)
    .setFetchMode("kittens", FetchMode.EAGER)
    .list();
This query will fetch both mate and kittens by outer join.
29.7. Components
To add a restriction against a property of an embedded component, the component property name should be prepended to the
property name when creating the Restriction . The criteria object should be created on the owning entity, and cannot be
created on the component itself. For example, suppose the Cat has a component property fullName with sub-properties
firstName and lastName :
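Assuming such a mapping, a restriction against a component sub-property prepends the component property name, as in this sketch (the property values are illustrative):

```java
//the component property name (fullName) prefixes the sub-property
List cats = session.createCriteria(Cat.class)
    .add( Restrictions.eq("fullName.lastName", "Cattington") )
    .list();
```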
Note: this does not apply when querying collections of components; for that, see Collections below.
29.8. Collections
When using criteria against collections, there are two distinct cases. One is if the collection contains entities (e.g. <one-to-many/>
or <many-to-many/> ) or components ( <composite-element/> ), and the second is if the collection contains scalar values
( <element/> ). In the first case, the syntax is as given above in the section Associations, where we restrict the kittens collection.
Essentially, we create a Criteria object against the collection property and restrict the entity or component properties using
that instance.
For querying a collection of basic values, we still create the Criteria object against the collection, but to reference the value, we
use the special property "elements". For an indexed collection, we can also reference the index property using the special
property "indices".
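For example, assuming the Cat entity maps a collection of basic values named nickNames (an illustrative name), a restriction on the values themselves might look like:

```java
//the special "elements" property refers to the collection's scalar values
List cats = session.createCriteria(Cat.class)
    .createCriteria("nickNames")
        .add( Restrictions.eq("elements", "BadBoy") )
    .list();
```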
29.9. Example queries
JAVA
Cat cat = new Cat();
cat.setSex('F');
cat.setColor(Color.BLACK);
List results = session.createCriteria(Cat.class)
    .add( Example.create(cat) )
    .list();
Version properties, identifiers and associations are ignored. By default, null valued properties are excluded.
JAVA
Example example = Example.create(cat)
    .excludeZeroes()           //exclude zero valued properties
    .excludeProperty("color")  //exclude the property named "color"
    .ignoreCase()              //perform case insensitive string comparisons
    .enableLike();             //use like for string comparisons
List results = session.createCriteria(Cat.class)
    .add(example)
    .list();
You can even use examples to place criteria upon associated objects.
JAVA
List results = session.createCriteria(Cat.class)
    .add( Example.create(cat) )
    .createCriteria("mate")
        .add( Example.create( cat.getMate() ) )
    .list();
29.10. Projections, aggregation and grouping
JAVA
List results = session.createCriteria(Cat.class)
    .setProjection( Projections.rowCount() )
    .add( Restrictions.eq("color", Color.BLACK) )
    .list();
JAVA
List results = session.createCriteria(Cat.class)
    .setProjection( Projections.projectionList()
        .add( Projections.rowCount() )
        .add( Projections.avg("weight") )
        .add( Projections.max("weight") )
        .add( Projections.groupProperty("color") )
    )
    .list();
There is no explicit "group by" necessary in a criteria query. Certain projection types are defined to be grouping projections, which
also appear in the SQL group by clause.
An alias can be assigned to a projection so that the projected value can be referred to in restrictions or orderings. Here are two
different ways to do this:
JAVA
List results = session.createCriteria(Cat.class)
    .setProjection( Projections.alias( Projections.groupProperty("color"), "colr" ) )
    .addOrder( Order.asc("colr") )
    .list();
JAVA
List results = session.createCriteria(Cat.class)
    .setProjection( Projections.groupProperty("color").as("colr") )
    .addOrder( Order.asc("colr") )
    .list();
The alias() and as() methods simply wrap a projection instance in another, aliased, instance of Projection . As a shortcut,
you can assign an alias when you add the projection to a projection list:
JAVA
List results = session.createCriteria(Cat.class)
    .setProjection( Projections.projectionList()
        .add( Projections.rowCount(), "catCountByColor" )
        .add( Projections.avg("weight"), "avgWeight" )
        .add( Projections.max("weight"), "maxWeight" )
        .add( Projections.groupProperty("color"), "color" )
    )
    .addOrder( Order.desc("catCountByColor") )
    .addOrder( Order.desc("avgWeight") )
    .list();
JAVA
List results = session.createCriteria(Domestic.class, "cat")
    .createAlias("kittens", "kit")
    .setProjection( Projections.projectionList()
        .add( Projections.property("cat.name"), "catName" )
        .add( Projections.property("kit.name"), "kitName" )
    )
    .addOrder( Order.asc("catName") )
    .addOrder( Order.asc("kitName") )
    .list();
JAVA
List results = session.createCriteria(Cat.class)
    .setProjection( Property.forName("name") )
    .add( Property.forName("color").eq(Color.BLACK) )
    .list();
29.11. Detached queries and subqueries
JAVA
DetachedCriteria query = DetachedCriteria.forClass(Cat.class)
    .add( Property.forName("sex").eq('F') );
A DetachedCriteria can also be used to express a subquery. Criterion instances involving subqueries can be obtained via
Subqueries or Property .
JAVA
DetachedCriteria avgWeight = DetachedCriteria.forClass(Cat.class)
    .setProjection( Property.forName("weight").avg() );
session.createCriteria(Cat.class)
    .add( Property.forName("weight").gt(avgWeight) )
    .list();
JAVA
DetachedCriteria weights = DetachedCriteria.forClass(Cat.class)
    .setProjection( Property.forName("weight") );
session.createCriteria(Cat.class)
    .add( Subqueries.geAll("weight", weights) )
    .list();
JAVA
DetachedCriteria avgWeightForSex = DetachedCriteria.forClass(Cat.class, "cat2")
    .setProjection( Property.forName("weight").avg() )
    .add( Property.forName("cat2.sex").eqProperty("cat.sex") );
session.createCriteria(Cat.class, "cat")
    .add( Property.forName("weight").gt(avgWeightForSex) )
    .list();
29.12. Queries by natural identifier
First, map the natural key of your entity using <natural-id> and enable use of the second-level cache.
XML
<class name="User">
<cache usage="read-write"/>
<id name="id">
<generator class="increment"/>
</id>
<natural-id>
<property name="name"/>
<property name="org"/>
</natural-id>
<property name="password"/>
</class>
This functionality is not intended for use with entities with mutable natural keys.
Once you have enabled the Hibernate query cache, the Restrictions.naturalId() method allows you to make use of the more
efficient cache algorithm.
JAVA
session.createCriteria(User.class)
    .add( Restrictions.naturalId()
        .set("name", "gavin")
        .set("org", "hb")
    ).setCacheable(true)
    .uniqueResult();
30. Legacy Hibernate Native Queries
Example 671. Named sql query using the <sql-query> mapping element
JAVA
List people = session
.getNamedQuery( "persons" )
.setParameter( "namePattern", namePattern )
.setMaxResults( 50 )
.list();
The <return-join> element is used to join associations and the <load-collection> element is used to define queries which
initialize collections.
XML
<sql-query name = "personsWith">
<return alias="person" class="eg.Person"/>
<return-join alias="address" property="person.mailingAddress"/>
SELECT person.NAME AS {person.name},
person.AGE AS {person.age},
person.SEX AS {person.sex},
address.STREET AS {address.street},
address.CITY AS {address.city},
address.STATE AS {address.state},
address.ZIP AS {address.zip}
FROM PERSON person
JOIN ADDRESS address
ON person.ID = address.PERSON_ID AND address.TYPE='MAILING'
WHERE person.NAME LIKE :namePattern
</sql-query>
A named SQL query may return a scalar value. You must declare the column alias and Hibernate type using the
<return-scalar> element:
XML
<sql-query name = "mySqlQuery">
<return-scalar column = "name" type="string"/>
<return-scalar column = "age" type="long"/>
SELECT p.NAME AS name,
p.AGE AS age
FROM PERSON p WHERE p.NAME LIKE 'Hiber%'
</sql-query>
You can externalize the resultset mapping information in a <resultset> element, which allows you to reuse it
across several named queries or via the setResultSetMapping() API.
XML
<resultset name = "personAddress">
<return alias="person" class="eg.Person"/>
<return-join alias="address" property="person.mailingAddress"/>
</resultset>
You can, alternatively, use the resultset mapping information in your hbm files directly in Java code.
JAVA
List cats = session
.createSQLQuery( "select {cat.*}, {kitten.*} from cats cat, cats kitten where kitten.mother = cat.id" )
.setResultSetMapping("catAndKitten")
.list();
XML
<sql-query name = "mySqlQuery">
<return alias = "person" class = "eg.Person">
<return-property name = "name" column = "myName"/>
<return-property name = "age" column = "myAge"/>
<return-property name = "sex" column = "mySex"/>
</return>
SELECT person.NAME AS myName,
person.AGE AS myAge,
person.SEX AS mySex
FROM PERSON person WHERE person.NAME LIKE :name
</sql-query>
<return-property> also works with multiple columns. This solves a limitation of the {} syntax, which does not allow fine-grained
control of multi-column properties.
In this example, <return-property> was used in combination with the {} syntax for injection. This allows users to choose how
they want to refer to columns and properties.
If your mapping has a discriminator you must use <return-discriminator> to specify the discriminator column.
SQL
CREATE OR REPLACE FUNCTION selectAllEmployments
RETURN SYS_REFCURSOR
AS
st_cursor SYS_REFCURSOR;
BEGIN
OPEN st_cursor FOR
SELECT EMPLOYEE, EMPLOYER,
STARTDATE, ENDDATE,
REGIONCODE, EID, VALUE, CURRENCY
FROM EMPLOYMENT;
RETURN st_cursor;
END;
To use this query in Hibernate you need to map it via a named query.
XML
<sql-query name = "selectAllEmployees_SP" callable = "true">
<return alias="emp" class="Employment">
<return-property name = "employee" column = "EMPLOYEE"/>
<return-property name = "employer" column = "EMPLOYER"/>
<return-property name = "startDate" column = "STARTDATE"/>
<return-property name = "endDate" column = "ENDDATE"/>
<return-property name = "regionCode" column = "REGIONCODE"/>
<return-property name = "id" column = "EID"/>
<return-property name = "salary">
<return-column name = "VALUE"/>
<return-column name = "CURRENCY"/>
</return-property>
</return>
{ ? = call selectAllEmployments() }
</sql-query>
Stored procedures currently only return scalars and entities. <return-join> and <load-collection> are not supported.
The rules are different for each database, since database vendors have different stored procedure semantics/syntax.
For Oracle: a function must return a result set. The first parameter of a procedure must be an OUT that returns a result set. This is done by using a SYS_REFCURSOR type in Oracle 9 or 10; in Oracle you need to define a REF CURSOR type. See the Oracle literature for further information.
For Sybase or MS SQL Server: the procedure must return a result set. Note that since these servers can return multiple result sets and update counts, Hibernate will iterate the results and take the first result that is a result set as its return value. Everything else will be discarded.
If you can enable SET NOCOUNT ON in your procedure it will probably be more efficient, but this is not a requirement.
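As an illustration, a minimal SQL Server procedure following these rules might look like the sketch below. The procedure name mirrors the Oracle example above; the table and columns are those of the EMPLOYMENT example:
SQL
CREATE PROCEDURE selectAllEmployments AS
BEGIN
    -- Suppress "rows affected" counts so the result set is the first result returned
    SET NOCOUNT ON;
    SELECT EMPLOYEE, EMPLOYER,
           STARTDATE, ENDDATE,
           REGIONCODE, EID, VALUE, CURRENCY
    FROM EMPLOYMENT;
END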
Hibernate can also use custom SQL for create, update, and delete operations, declared at the class level with <sql-insert>, <sql-update>, and <sql-delete>:
XML
<class name = "Person">
<id name = "id">
<generator class = "increment"/>
</id>
<property name = "name" not-null = "true"/>
<sql-insert> INSERT INTO PERSON (NAME, ID) VALUES ( UPPER(?), ? )</sql-insert>
<sql-update> UPDATE PERSON SET NAME=UPPER(?) WHERE ID=?</sql-update>
<sql-delete> DELETE FROM PERSON WHERE ID=?</sql-delete>
</class>
If you expect to call a stored procedure, be sure to set the callable attribute to true, in annotations
as well as in XML.
To check that the execution happens correctly, Hibernate allows you to define one of those three strategies:
none: no check is performed; the stored procedure is expected to fail upon issues
count: use of the row count returned by the executeUpdate() call to check that the update was successful
param: like count, but using an output parameter rather than the standard mechanism
To define the result check style, use the check parameter, which is again available in annotations as well as in XML.
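Putting the callable and check attributes together, a class mapping that delegates its CUD statements to stored procedures might look like the following sketch. The procedure names createPerson, deletePerson, and updatePerson are illustrative:
XML
<class name = "Person">
<id name = "id">
<generator class = "increment"/>
</id>
<property name = "name" not-null = "true"/>
<sql-insert callable = "true" check = "none">{call createPerson (?, ?)}</sql-insert>
<sql-update callable = "true">{? = call updatePerson (?, ?)}</sql-update>
<sql-delete callable = "true">{? = call deletePerson (?)}</sql-delete>
</class>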
Last but not least, stored procedures are in most cases required to return the number of rows inserted, updated and deleted.
Hibernate always registers the first statement parameter as a numeric output parameter for the CUD operations:
CREATE OR REPLACE FUNCTION updatePerson (uid IN NUMBER, uname IN VARCHAR2)
RETURN NUMBER IS
BEGIN
update PERSON
set
NAME = uname
where
ID = uid;
return SQL%ROWCOUNT;
END updatePerson;
You can also declare your own SQL (or HQL) queries for entity loading, using a named query:
XML
<sql-query name = "person">
<return alias = "pers" class = "Person" lock-mode = "upgrade"/>
SELECT NAME AS {pers.name}, ID AS {pers.id}
FROM PERSON
WHERE ID=?
FOR UPDATE
</sql-query>
This is just a named query declaration, as discussed earlier. You can reference this named query in a class mapping:
XML
<class name = "Person">
<id name = "id">
<generator class = "increment"/>
</id>
<property name = "name" not-null = "true"/>
<loader query-ref = "person"/>
</class>
You can even define a query for collection loading:
XML
<set name = "employments" inverse = "true">
<key/>
<one-to-many class = "Employment"/>
<loader query-ref = "employments"/>
</set>
XML
<sql-query name = "employments">
<load-collection alias = "emp" role = "Person.employments"/>
SELECT {emp.*}
FROM EMPLOYMENT emp
WHERE EMPLOYER = :id
ORDER BY STARTDATE ASC, EMPLOYEE ASC
</sql-query>
You can also define an entity loader that loads a collection by join fetching:
XML
<sql-query name = "person">
<return alias = "pers" class = "Person"/>
<return-join alias = "emp" property = "pers.employments"/>
SELECT {pers.*}, {emp.*}
FROM PERSON pers
LEFT OUTER JOIN EMPLOYMENT emp
ON pers.ID = emp.PERSON_ID
WHERE ID=?
</sql-query>
31. References
[PoEAA] Martin Fowler. Patterns of Enterprise Application Architecture. Addison-Wesley Publishing Company. 2003.
[JPwH] Christian Bauer & Gavin King. Java Persistence with Hibernate (http://www.manning.com/bauer2). Manning Publications Co. 2007.