Hazelcast is an in-memory data distribution system for Java data structures.
Highly scalable in cluster and grid environments, using a distributed hash table (DHT) architecture.
DHT
Features (1)
There are two editions
Community
Apache License 2.0
Enterprise
Additional features
Management Console
Elastic Memory
JAAS
Portable
Pure Java
Built-in support for statistics and cluster-member events
Supports dynamic cluster creation
Dynamic fail-over
Dynamic HTTP session clustering
Dynamic scaling to hundreds of servers
Dynamic partitioning with backups
Super fast: thousands of operations per second
Super efficient: low memory and CPU usage
The default configuration keeps 1 backup of everything, although this is configurable
Communications between cluster members
Networking
Multicast
TCP/IP
Supports SSL communications
IO
Communications between cluster members always use Java NIO
Features (2)
Distributed implementations of Java classes: Map, Set, Queue, List, Lock, Executor Service
Distributed implementation of pub/sub messaging topics
Distributed implementation of listeners and events
Support for transactional operations in J2EE architectures (JCA)
Very lightweight (a single 1.5 MB jar)
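A minimal sketch of using these distributed structures from an embedded node. It assumes the Hazelcast jar is on the classpath; the structure names ("customers", "tasks", "tags") are made up for illustration:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class DistributedStructures {
    public static void main(String[] args) {
        // Starting an instance joins (or forms) the cluster per hazelcast.xml.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());

        // Each structure is cluster-wide: every node sees the same data.
        Map<String, String> customers = hz.getMap("customers"); // distributed Map
        customers.put("1", "Joe Smith");

        Queue<String> tasks = hz.getQueue("tasks");             // distributed Queue
        tasks.offer("render-image-42");

        Set<String> tags = hz.getSet("tags");                   // distributed Set
        tags.add("premium");

        System.out.println("customers=" + customers.size()
                + " tasks=" + tasks.size() + " tags=" + tags.size());

        Hazelcast.shutdownAll();
    }
}
```
The same `getMap`/`getQueue`/`getSet` calls return the same cluster-wide structures from any member, which is what makes state sharing transparent.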
Common use cases
Sharing data/state among multiple servers (web session sharing)
Caching data in a distributed way to improve performance
Enabling high availability of the application through clustering
Providing secure communications between servers
Partitioning data in memory
Sending and receiving messages between applications
Distributing the workload among servers
Adding parallel processing to the application
Providing fault-tolerant data management
Architecture
Stand-alone
run.sh
Embedded
Inside the application
Client
Super Client
Node
As a resource in a J2EE application
Grid
Node... master?
There is no single cluster master or anything that could become a single point of failure.
Every node in the cluster has equal rights and responsibilities.
No node is superior, and there is no dependency on an external 'server' or 'master' concept.
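Because every node is a peer, membership changes can be observed from any member. A hedged sketch (listener signatures vary slightly across Hazelcast versions; this follows the classic `MembershipListener` shape):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.MembershipEvent;
import com.hazelcast.core.MembershipListener;

public class WatchMembers {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());

        // Any node can register for cluster-membership events;
        // there is no special "master" node to ask.
        hz.getCluster().addMembershipListener(new MembershipListener() {
            public void memberAdded(MembershipEvent e) {
                System.out.println("Member joined: " + e.getMember());
            }
            public void memberRemoved(MembershipEvent e) {
                System.out.println("Member left: " + e.getMember());
            }
        });
    }
}
```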
Symmetric
<symmetric-encryption enabled="false">
<!--
encryption algorithm such as
DES/ECB/PKCS5Padding,
PBEWithMD5AndDES,
AES/CBC/PKCS5Padding,
Blowfish,
DESede
-->
<algorithm>PBEWithMD5AndDES</algorithm>
<!-- salt value to use when generating the secret key -->
<salt>thesalt</salt>
<!-- pass phrase to use when generating the secret key -->
<password>thepass</password>
<!-- iteration count to use when generating the secret key -->
<iteration-count>19</iteration-count>
</symmetric-encryption>
Asymmetric
<asymmetric-encryption enabled="false">
<!-- encryption algorithm -->
<algorithm>RSA/NONE/PKCS1PADDING</algorithm>
<!-- private key password -->
<keyPassword>thekeypass</keyPassword>
<!-- private key alias -->
<keyAlias>local</keyAlias>
<!-- key store type -->
<storeType>JKS</storeType>
<!-- key store password -->
<storePassword>thestorepass</storePassword>
<!-- path to the key store -->
<storePath>keystore</storePath>
</asymmetric-encryption>
AWS?
<aws enabled="false">
<access-key>my-access-key</access-key>
<secret-key>my-secret-key</secret-key>
<!--optional, default is us-east-1 -->
<region>us-west-1</region>
<!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
<security-group-name>hazelcast-sg</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
Executor framework,
which allows you to asynchronously execute your tasks:
logical units of work such as database queries,
complex calculations, image rendering, etc.
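A hedged sketch of submitting such a task to the distributed executor. The `SumTask` class and its arithmetic are illustrative; tasks must be Serializable so they can be shipped to another member, and in newer Hazelcast versions the executor is obtained by name rather than with the no-argument call used here:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class ExecutorDemo {
    // The task travels over the wire, so it must be Serializable.
    static class SumTask implements Callable<Long>, Serializable {
        public Long call() {
            long sum = 0;
            for (int i = 1; i <= 100; i++) sum += i; // a stand-in "complex calculation"
            return sum;
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());
        // The returned ExecutorService runs tasks somewhere in the cluster,
        // sized by the <executor-service> pool settings in hazelcast.xml.
        ExecutorService exec = hz.getExecutorService();
        Future<Long> result = exec.submit(new SumTask());
        System.out.println("sum = " + result.get());
        Hazelcast.shutdownAll();
    }
}
```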
Configuration of
Queues
Maps
Semaphores
Editing the hazelcast.xml file
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-basic.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<group>
<name>dev</name>
<password>dev-pass</password>
</group>
<network>
<port auto-increment="true">5701</port>
<join>
<multicast enabled="true">
<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="false">
<hostname>tsort.local</hostname>
<interface>10.0.2.10</interface>
</tcp-ip>
<aws enabled="false">
<access-key>my-access-key</access-key>
<secret-key>my-secret-key</secret-key>
<!--optional, default is us-east-1 -->
<region>us-west-1</region>
<!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
<security-group-name>hazelcast-sg</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
</join>
<interfaces enabled="true">
<interface>10.0.2.*</interface>
</interfaces>
<symmetric-encryption enabled="false">
<!--
encryption algorithm such as
DES/ECB/PKCS5Padding,
PBEWithMD5AndDES,
AES/CBC/PKCS5Padding,
Blowfish,
DESede
-->
<algorithm>PBEWithMD5AndDES</algorithm>
<!-- salt value to use when generating the secret key -->
<salt>thesalt</salt>
<!-- pass phrase to use when generating the secret key -->
<password>thepass</password>
<!-- iteration count to use when generating the secret key -->
<iteration-count>19</iteration-count>
</symmetric-encryption>
<asymmetric-encryption enabled="false">
<!-- encryption algorithm -->
<algorithm>RSA/NONE/PKCS1PADDING</algorithm>
<!-- private key password -->
<keyPassword>thekeypass</keyPassword>
<!-- private key alias -->
<keyAlias>local</keyAlias>
<!-- key store type -->
<storeType>JKS</storeType>
<!-- key store password -->
<storePassword>thestorepass</storePassword>
<!-- path to the key store -->
<storePath>keystore</storePath>
</asymmetric-encryption>
</network>
<executor-service>
<core-pool-size>16</core-pool-size>
<max-pool-size>64</max-pool-size>
<keep-alive-seconds>60</keep-alive-seconds>
</executor-service>
<queue name="default">
<!--
Maximum size of the queue. When a JVM's local queue size reaches the maximum,
all put/offer operations will get blocked until the queue size
of the JVM goes down below the maximum.
Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
-->
<max-size-per-jvm>0</max-size-per-jvm>
<!--
Name of the map configuration that will be used for the backing distributed
map for this queue.
-->
<backing-map-ref>default</backing-map-ref>
</queue>
<map name="default">
<!--
Number of backups. If 1 is set as the backup-count for example,
then all entries of the map will be copied to another JVM for
fail-safety. 0 means no backup.
-->
<backup-count>1</backup-count>
<!--
Maximum number of seconds for each entry to stay in the map. Entries that are
older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
will get automatically evicted from the map.
Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
-->
<time-to-live-seconds>0</time-to-live-seconds>
<!--
Maximum number of seconds for each entry to stay idle in the map. Entries that are
idle(not touched) for more than <max-idle-seconds> will get
automatically evicted from the map. Entry is touched if get, put or containsKey is called.
Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
-->
<max-idle-seconds>0</max-idle-seconds>
<!--
Valid values are:
NONE (no eviction),
LRU (Least Recently Used),
LFU (Least Frequently Used).
NONE is the default.
-->
<eviction-policy>NONE</eviction-policy>
<!--
Maximum size of the map. When max size is reached,
map is evicted based on the policy defined.
Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
-->
<max-size policy="cluster_wide_map_size">0</max-size>
<!--
When max. size is reached, specified percentage of
the map will be evicted. Any integer between 0 and 100.
If 25 is set for example, 25% of the entries will
get evicted.
-->
<eviction-percentage>25</eviction-percentage>
<!--
While recovering from split-brain (network partitioning),
map entries in the small cluster will merge into the bigger cluster
based on the policy set here. When an entry merges into the
cluster, there might already be an existing entry with the same key.
Values of these entries might be different for that same key.
Which value should be set for the key? Conflict is resolved by
the policy set here. Default policy is hz.ADD_NEW_ENTRY
There are built-in merge policies such as
hz.NO_MERGE ; no entry will merge.
hz.ADD_NEW_ENTRY ; entry will be added if the merging entry's key
doesn't exist in the cluster.
hz.HIGHER_HITS ; entry with the higher hits wins.
hz.LATEST_UPDATE ; entry with the latest update wins.
-->
<merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
</map>
<!-- Add your own semaphore configurations here:
<semaphore name="default">
<initial-permits>10</initial-permits>
<semaphore-factory enabled="true">
<class-name>com.acme.MySemaphoreFactory</class-name>
</semaphore-factory>
</semaphore>
-->
<!-- Add your own map merge policy implementations here:
<merge-policies>
<map-merge-policy name="MY_MERGE_POLICY">
<class-name>com.acme.MyOwnMergePolicy</class-name>
</map-merge-policy>
</merge-policies>
-->
</hazelcast>
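Assuming the file above is saved to disk, a node can be pointed at it programmatically instead of relying on the classpath default (the path is illustrative):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.FileSystemXmlConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class StartNode {
    public static void main(String[] args) throws Exception {
        // Load the hazelcast.xml shown above from the filesystem.
        Config config = new FileSystemXmlConfig("hazelcast.xml");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Cluster members: " + hz.getCluster().getMembers());
    }
}
```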
Hands-on
Stand-alone
Run the run.sh client
add
maps
m.put key value
lists
l.add item
sets
s.add item
queues
q.offer string
remove
maps
m.remove key
lists
l.remove item
sets
s.remove item
queues
q.poll
perform locking
lock key
trylock key
unlock key
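The same lock/trylock/unlock operations are available from Java: `getLock` returns a distributed lock implementing `java.util.concurrent.locks.Lock`. A sketch; the key name "orders" is illustrative:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.concurrent.locks.Lock;

public class LockDemo {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());
        // Cluster-wide lock: only one member at a time may hold "orders".
        Lock lock = hz.getLock("orders");
        lock.lock();
        try {
            // Critical section, exclusive across the whole cluster.
            System.out.println("holding the lock");
        } finally {
            lock.unlock(); // always release, even on exceptions
        }
        Hazelcast.shutdownAll();
    }
}
```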
Run a Java client that performs actions
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.client.HazelcastClient;
import java.util.Map;
import java.util.Collection;

class HelloHazelcast {
    public static void main(String[] args) throws Exception {
        // If the connected member dies, the client will
        // switch to the next one in the list.
        HazelcastInstance client = HazelcastClient.newHazelcastClient("dev", "dev-pass", "XXXX", "YYYY:5702", "ZZZZ");
        // All cluster operations that you can do with an ordinary HazelcastInstance
        Map<String, String> mapCustomers = client.getMap("customers");
        mapCustomers.put("1", "Joe Smith");
        mapCustomers.put("2", "Ali Selam");
        mapCustomers.put("3", "Avi Noyan");
        mapCustomers.put("4", "San Carter");
        mapCustomers.put("5", "Samantha Carter");
        Collection<String> colCustomers = mapCustomers.values();
        for (String customer : colCustomers) {
            // process each customer
            System.out.println("Customer fullName is: " + customer);
        }
        // Exiting...
        client.shutdown();
    }
}
Start several cluster nodes on a single machine
Start additional cluster nodes using the Mac's Wi-Fi and a shared filesystem