
Over the last few weeks, a local building contractor was busy reconstructing our house and renewing the basement drainage. As a result, a new basement drainage has been installed and a new hopper for the sewage pumps has been placed. To be honest, it is a lot of heavy work performed by the builder, including digging, insulating, pounding and other dirty stuff – but in the end the system only works if a sewage pump is removing the drainage water. In contrast to the earthworks, which require much experience and manpower, neither of which I have, I took over the plumbing and electrical work on the pumps. In order to have a fail-over system, I installed two pumps, where the second pump is triggered if the first one fails. To keep the second pump operational even on a short circuit of the first one, I put it on a separate phase (in Europe, we have a three-phase power supply, 220V each, shifted by 120° to each other, unlike the split-phase system in the US). Having this system installed, you get some periodic work to do: after all, you want to make sure by regular testing procedures that the second pump operates if the first one has failed. Since I’m lazy and like inventing and constructing stuff more than executing regular test procedures, I decided to implement a monitoring system using some cheap electronic and computer components: a Raspberry Pi and Tinkerforge hardware. continue reading…

Ranked

I have just started developing a small application in Scala, running on a standard enterprise Java stack. You can find more details on GitHub: https://github.com/holisticon/ranked.

Today, I’ll post some details on the persistence layer. The idea is to use Scala case classes and JPA as a persistence layer. For simple attributes it looks very cute:

/**
 * Superclass for all persistent entities.
 */
@MappedSuperclass
abstract class PersistentEntity(
  @BeanProperty @(Column @field)(name = "ID") @(Id @field) @(GeneratedValue @field)(strategy = GenerationType.AUTO) id: Long,
  @BeanProperty @(Column @field)(name = "VERSION") @(Version @field) version: Long) {
  def this() = this(-1, -1)
}

/**
 * Represents a player. A player has a name, initial and current ELO ranking.
 */
@Entity
@Table(name = "PLAYER")
case class Player(
  @BeanProperty @(Column @field)(name = "NAME") name: String) extends PersistentEntity {

  def this() = this(null)
}

/**
 * Represents a team. A team consists of at least one player and might have a name.
 */
@Entity
@Table(name = "TEAM")
case class Team(
  @BeanProperty @(Column @field)(name = "NAME") name: String) extends PersistentEntity {

  def this() = this(null)
}
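
Since the case classes are regular JPA entities, a quick smoke test can persist them through a plain EntityManager. The following is just a sketch in plain Java, assuming a persistence unit named "ranked" is configured:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class PersistenceSmokeTest {

  public static void main(String[] args) {
    // "ranked" is a hypothetical persistence unit name from persistence.xml
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("ranked");
    EntityManager em = emf.createEntityManager();
    em.getTransaction().begin();
    // clients use the case class constructor; the no-arg constructors exist for JPA only
    em.persist(new Player("Kermit"));
    em.getTransaction().commit();
    em.close();
    emf.close();
  }
}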

Stay tuned for updates on the development progress.

If you are interested in Xtext and the new features introduced in the upcoming version 2.0, you might want to install and try them out. Since it will only be officially released together with Eclipse Indigo, you have to execute some manual steps. In order to be able to install the new features, you need to add two additional update sites to your update manager and download the update site containing Xtext itself. The following steps worked for me:

Thanks to Dennis Huebner for the hints….


JFace Databinding enables an easy binding between values inside data models and SWT/JFace widgets. No more boring listeners to implement – just create observables and connect them using the data binding context. Several brilliant articles have been written about it; my favorites are those by Ralf Ebert and Lars Vogel.

One of the interesting aspects of databinding is data validation. The update strategies, responsible for propagating changes in models or in widgets, can be supplied with validators that make sure the data changes are legal. At the same time, the JSR-303 Bean Validation specification focuses on a modern, standardized way of data validation. In this post, I combine these subjects and use JSR-303 inside JFace Databinding validators.

One of the core insights of JSR-303 is the idea of annotating data validation constraints on the data itself. It is indeed a good observation that validation code strongly relies on the structure and semantics of the data. Following this idea consequently, the application developer should have to care about validation as little as possible while implementing business logic. A much better idea is to encapsulate the entire validation in domain-specific types. Let me demonstrate this by example; imagine the following class:

public class Customer {
  private String name;
  private String address;
  private String zip;
  private String city;
}

This is perfectly reasonable, but now consider not only the data storage/transport aspects but also the validation aspects. A standard approach would be to put the following validator logic into the databinding:

public class CustomerComposite {
[...]
  public void bindValues(Customer model, DataBindingContext dbc) {
    UpdateValueStrategy m2t = new UpdateValueStrategy();
    m2t.setAfterGetValidator(new IValidator() {
      @Override
      public IStatus validate(Object value) {
        String name = (String) value;
        // invalid if the name is missing or does not match the allowed pattern
        if (name == null || !Helper.isRegex(name, "[A-Za-z -]*")) {
          return ValidationStatus.error("Wrong name");
        }
        return ValidationStatus.ok();
      }
    });
    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(namefield),
      BeanProperties.value(Customer.class, "name").observe(model),
      new UpdateValueStrategy(), m2t);

    m2t = new UpdateValueStrategy();
    m2t.setAfterGetValidator(new IValidator() {
      @Override
      public IStatus validate(Object value) {
        String zipCode = (String) value;
        // invalid if the zip code is missing, not exactly five characters long or not numeric
        if (zipCode == null || zipCode.length() != 5 || !Helper.isRegex(zipCode, "[0-9]*")) {
          return ValidationStatus.error("Wrong zip code");
        }
        return ValidationStatus.ok();
      }
    });
    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(zipfield),
      BeanProperties.value(Customer.class, "zip").observe(model),
      new UpdateValueStrategy(), m2t);
  [...]
  }
}

That is quite a lot of code – and remember that JFace Databinding code like this cannot be reused in other parts of the application. Let’s instead put the validation logic on the data declaration, the way JSR-303 proposes:

public class Customer {
  @NotNull
  @Pattern(regexp = "[A-Za-z -]*")
  private String name;
  private String addressLine;
  @Size(min=1, max=5)
  @Pattern(regexp = "[0-9]*")
  private String zip;
  @NotNull
  @Pattern(regexp = "[A-Za-z -]*")
  private String city;
}

As a next step, let us develop an update-strategy factory which creates update strategies with an embedded validator for JSR-303 Bean Validation constraints.

public class BeanValidator implements IValidator {

  private final ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
  private final Class<?> beanType;
  private final String propertyName;

  public BeanValidator(Class<?> beanType, String propertyName) {
    this.beanType = beanType;
    this.propertyName = propertyName;
  }

  @Override
  @SuppressWarnings({ "rawtypes", "unchecked" })
  public IStatus validate(Object value) {
    // The databinding passes the single property value here, not the whole bean.
    // Therefore validateValue() checks it against the constraints declared
    // for this property on the bean class.
    Set<ConstraintViolation<?>> violations = (Set) factory.getValidator()
      .validateValue((Class) beanType, propertyName, value, Default.class);
    if (violations.size() > 0) {
      List<IStatus> statusList = new ArrayList<IStatus>();
      for (ConstraintViolation<?> cv : violations) {
        statusList.add(ValidationStatus.error(cv.getMessage()));
      }
      return new MultiStatus(Activator.PLUGIN_ID, IStatus.ERROR,
        statusList.toArray(new IStatus[statusList.size()]), "Validation errors", null);
    }
    return ValidationStatus.ok();
  }
}

public class StrategyFactory {
  public static UpdateValueStrategy getStrategy(Class<?> beanType, String propertyName) {
    UpdateValueStrategy strategy = new UpdateValueStrategy();
    strategy.setAfterConvertValidator(new BeanValidator(beanType, propertyName));
    return strategy;
  }
}

Using the StrategyFactory, the validation code inside the composite becomes trivial:

public class CustomerComposite {
[...]
  public void bindValues(Customer model, DataBindingContext dbc) {
    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(namefield),
      BeanProperties.value(Customer.class, "name").observe(model),
      new UpdateValueStrategy(), StrategyFactory.getStrategy(Customer.class, "name"));

    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(zipfield),
      BeanProperties.value(Customer.class, "zip").observe(model),
      new UpdateValueStrategy(), StrategyFactory.getStrategy(Customer.class, "zip"));
  [...]
  }
}

An important property of this validation approach is that it can be reused in other application layers (e.g. in the service layer or in the data access layer). In other words, you can use the same validation logic across the entire application and just remain valid…
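
To illustrate the reuse, here is a sketch of a hypothetical service-layer class that checks the very same constraints without any databinding involved, using nothing but the javax.validation API:

import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

public class CustomerService {

  private final Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

  public void save(Customer customer) {
    // the same annotations on Customer drive the validation here
    Set<ConstraintViolation<Customer>> violations = validator.validate(customer);
    if (!violations.isEmpty()) {
      throw new IllegalArgumentException(violations.iterator().next().getMessage());
    }
    // ... persist the valid customer ...
  }
}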

Google Guice

Abstract

Developing an Eclipse RCP rich client for a multi-tier Java enterprise application is becoming an interesting alternative to other frontend technologies. An important aspect is the ability to develop and test the frontend independently of the backend. In this article, an approach for testing and decoupling server and client development in Eclipse RCP is introduced.

Business Delegate, Service Locator and Dependency Injection

In a Java multi-tier application, the business logic is implemented in the form of server-hosted components (EJBs, Spring beans, OSGi services, etc.). In this example, an EJB backend is used, but it can easily be replaced with the other technologies mentioned above. A rich client is connected to the server using some remoting technology and contains local storage for client-specific state, which allows building more complex and reactive applications. A common approach to hiding the details of remote invocations on the client side is the Business Delegate enterprise design pattern. My favorite way of implementing it is to define a technology-independent business interface (POJI = Plain Old Java Interface) and implement it on the server side with the server beans and on the client side with the business delegates. This article uses the following business interface as an example:

public interface MyBusinessService extends BaseBusinessService {

	/**
	 * Full qualified name of the interface (used for Binding).
	 */
	String IF_FQN = "....MyBusinessService";

	/**
	 * Does something on server.
	 *
	 * @param parameter Parameter of invocation.
	 * @return Result of execution.
	 */
	List<OperationResultDto> doSomeStuff(ParameterDto parameter);
}

The delegates make use of the Service Locator design pattern. Here is an example of what the implementation of the BaseFacade, the superclass of all business delegates, can look like:

public abstract class BaseFacade {
...
	/**
	 * Delegates a call to a stateless session bean on the server.
	 *
	 * @param <T> type business interface
	 * @param iterfaze business interface
	 * @param jndi binding on the server
	 * @return result of invocation
	 */
	public static <T> T delegate(Class<T> iterfaze, String jndi) {
		return Activator.getDefault().getServiceLocator().getStateless(iterfaze, jndi);
	}

	/** ... */
	public static void debug(String message) {...}
}

The ServiceLocator.getStateless() method hides the lookup of the remote EJB. Using the BaseFacade, the business delegate looks as follows:

public class MyBusinessServiceFacade extends BaseFacade implements MyBusinessService {

	public List<OperationResultDto> doSomeStuff(ParameterDto parameter) {
		debug("entering doSomeStuff(ParameterDto)");
		final List<OperationResultDto> result = delegate(MyBusinessService.class, MyBusinessService.IF_FQN)
				.doSomeStuff(parameter);
		debug("leaving doSomeStuff(ParameterDto)");
		return result;
	}

}
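
By the way, the lookup hidden behind getStateless() is not shown here; it might boil down to a plain JNDI call. The following is only a sketch under that assumption, with hypothetical error handling:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ServiceLocator {

	/**
	 * Resolves a remote stateless session bean bound under the given JNDI name.
	 */
	public <T> T getStateless(Class<T> businessInterface, String jndi) {
		try {
			final InitialContext context = new InitialContext();
			// the bean is bound under the full qualified interface name (IF_FQN)
			return businessInterface.cast(context.lookup(jndi));
		} catch (NamingException e) {
			throw new IllegalStateException("Lookup of " + jndi + " failed", e);
		}
	}
}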

This setup results in the following architecture:

Architecture

Business Delegate Boilerplate

The setup looks good in theory, but in practice it is pretty tedious to program on the client side. You can reduce the effort of creating business delegates (in fact, I use MDSD techniques and Xtext to generate them), but in every place a service is required, the business delegate is instantiated directly. The approach works, but it is just not nice, because you reference the implementation directly.

A common approach to avoid writing such direct instantiation code is the use of a Dependency Injection framework. A very popular one is Google Guice, which is used in this article. The essential idea of Google Guice is to configure bindings between dependencies and their resolutions, and to use Guice as a kind of factory that creates instances and injects dependencies into them. For the creation of the bindings, Guice offers the class AbstractModule to subclass from.

public class ServiceFacadeModule extends AbstractModule {

	/**
	 * @see com.google.inject.AbstractModule#configure()
	 */
	@Override
	protected void configure() {
		bind(MyBusinessService.class).to(MyBusinessServiceFacade.class);
	}
}
...
public class InjectorHolder {
...
	private Injector injector;

	public static void configureInjector(AbstractModule module) {
		InjectorHolder.getInstance().setInjector(Guice.createInjector(module));
	}

	/**
	 * Creates an instance of a class.
	 *
	 * @param <T> type of the class
	 * @param clazz type to create
	 * @return an instance with injected dependencies
	 */
	public static <T> T get(Class<T> clazz) {
		return getInstance().getInjector().getInstance(clazz);
	}
...
}

In order to hide references to Guice classes in client code, the DI can be encapsulated inside an InjectorHolder, which acts as a factory for instances with service references:

/**
 * Class with a reference.
 */
public class DataSource {

	/**
	 * Reference to the service.
	 */
	private MyBusinessService myBusinessService;

	@Inject
	public void setMyBusinessService(MyBusinessService myBusinessService) {
		this.myBusinessService = myBusinessService;
	}
}
/**
 * Client which requires the data source with injected references.
 */
public class MyView {

	/**
	 * Let Google Guice create the instance and inject dependencies.
	 */
	private DataSource source = InjectorHolder.get(DataSource.class);
}

Please note that the data source uses setter injection for the service implementation, and that the InjectorHolder serves as a factory to create an instance of the data source with the injected reference.

Packaging Guice

After this short introduction to Guice, it is time to package this third-party library into the Eclipse RCP client. In fact, it is all about putting the JARs (guice-2.0.jar, aopalliance-1.0.jar) into some folder inside the client plug-in and modifying the MANIFEST.MF so that the JARs are on the bundle classpath and the packages are listed as exported.
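
For illustration, the relevant MANIFEST.MF entries might look like this – just a sketch, assuming the JARs are placed in a lib/ folder (the exported package list is shortened):

Bundle-ClassPath: .,
 lib/guice-2.0.jar,
 lib/aopalliance-1.0.jar
Export-Package: com.google.inject,
 com.google.inject.binder,
 org.aopalliance.intercept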

What about mocks?

Once the client has the ability to use business delegates, it can access the business functionality of the server. However, this requires that the server is already implemented. In order to decouple the client from the server development, mocks can be used. Mocks are popular in the context of unit tests, but they can just as well simulate the behavior of the server implementation. Since mocks should not be delivered into production, it is a good idea to put them into a separate mock plug-in, included in a dedicated mock feature. The mock plug-in should export its packages, and the main plug-in should import these packages instead of declaring a dependency on the mock plug-in directly. The mock feature is included in the product / top-level feature as an optional feature. This specific configuration allows the main plug-in to instantiate classes from the mock plug-in if it is delivered, but does not produce errors if the mock plug-in is not included in the release.
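
In MANIFEST.MF terms, this package-level decoupling might look as follows – a sketch with a hypothetical package name. In the mock plug-in:

Export-Package: com.example.client.mocks

And in the main plug-in, where the optional resolution keeps the plug-in resolvable when the mocks are absent:

Import-Package: com.example.client.mocks;resolution:=optional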

Since the business delegate implements the business interface, its mock should do so as well:

public class MyBusinessServiceMock implements MyBusinessService {

	public List<OperationResultDto> doSomeStuff(ParameterDto parameter) {
		debug("entering doSomeStuff(ParameterDto)");
		final List<OperationResultDto> result = new ArrayList<OperationResultDto>();
		result.add(new OperationResultDto(parameter.getValue()));
		debug("leaving doSomeStuff(ParameterDto)");
		return result;
	}
...
}

Since the mocks should also be injected by Guice, we define the binding module as well.

public class ServiceMockModule extends AbstractModule {
	protected void configure() {
		bind(MyBusinessService.class).to(MyBusinessServiceMock.class);
	}
}

Finally, we have two implementations and two Guice AbstractModule subclasses binding them. The last missing piece is a dynamic configuration which allows switching between them easily. For this purpose, we use the extension-point mechanism of Eclipse and define the following extension point (all documentation elements are removed for readability):

<schema targetNamespace="..." xmlns="...">
   <element name="extension">
      ...
      <complexType>
         <sequence><element ref="moduleConfiguration" minOccurs="1" maxOccurs="unbounded"/></sequence>
         <attribute name="point" type="string" use="required"></attribute>
         <attribute name="id" type="string"></attribute>
         <attribute name="name" type="string"></attribute>
      </complexType>
   </element>
   <element name="moduleConfiguration">
      <complexType>
         <attribute name="moduleClassname" type="string" use="required">
            <annotation>
               <appInfo><meta.attribute kind="java" basedOn="com.google.inject.AbstractModule:"/></appInfo>
            </annotation>
         </attribute>
         <attribute name="priority" type="string" use="required"></attribute>
      </complexType>
   </element>
</schema>

Using this definition, a plug-in can extend the main plug-in by providing moduleConfiguration elements, each consisting of the name of a class extending Guice’s AbstractModule and a priority. Using the following utility class, the module configurations can be read:

public class PluginUtility {

	public static TreeMap<Integer, AbstractModule> getModuleConfigurations() throws CoreException {
		final TreeMap<Integer, AbstractModule> moduleConfigurations = new TreeMap<Integer, AbstractModule>();
		IExtension[] moduleConfigurationExtensions = Platform.getExtensionRegistry().getExtensionPoint("...id...").getExtensions();
		for (IExtension moduleConfiguration : moduleConfigurationExtensions) {
			for (IConfigurationElement configElement : moduleConfiguration.getConfigurationElements()) {

				AbstractModule module = (AbstractModule) configElement.createExecutableExtension("moduleClassname");
				String priorityAsString = configElement.getAttribute("priority");
				int priority = 0;
				try {
					priority = Integer.parseInt(priorityAsString);
				} catch (NumberFormatException e) {
					throw new CoreException(...);
				}

				moduleConfigurations.put(Integer.valueOf(priority), module);
			}
		}
		return moduleConfigurations;
	}
}

Using this utility, the main plug-in can read all AbstractModules available at runtime and configure the dependency injection accordingly. This has to happen before the InjectorHolder is used for the first time. We select the AbstractModule with the highest priority (the biggest number).

public class InjectorHolder {
...
	private Injector injector;

	public static InjectorHolder getInstance() {
		if (instance == null) {
			instance = new InjectorHolder();
			try {
				// select the registered module with the highest priority
				instance.injector = Guice.createInjector(PluginUtility.getModuleConfigurations().lastEntry().getValue());
			} catch (CoreException e) {
				throw new IllegalStateException("Reading the module configurations failed", e);
			}
		}
		return instance;
	}
...
}

Finally, the two binding modules should be registered via the extension point. The plug-in containing the business delegates defines its module configuration with a “standard” priority:

   <extension
         point="....ModuleConfiguration" name="ModuleConfiguration">
      <moduleConfiguration
            moduleClassname="....ServiceFacadeModule"
            priority="1">
      </moduleConfiguration>
   </extension>

The mock plug-in defines a higher priority, which wins against the business delegate if it is included in the release:

   <extension
         point="....ModuleConfiguration" name="ModuleConfiguration">
      <moduleConfiguration
            moduleClassname="....ServiceMockModule"
            priority="10">
      </moduleConfiguration>
   </extension>

Summary

In this article, an implementation approach for the Business Delegate and Service Locator patterns is shown. Using the Google Guice Dependency Injection framework allows for a flexible resolution of dependencies in client code. Since Guice itself does not support multiple binding configurations, we introduced a self-defined extension point, which allows registering different DI configuration modules and assigning different priorities to them. In addition, we use the ability of Eclipse to define and use optional features to foster runtime-based configuration. Using different “Run Configurations”, you can start the RCP client with different implementations of your business services. If the mock plug-in is included, its higher priority wins against the business delegates. Therefore, the client can be developed against mock objects instead of real business delegates without any additional configuration.

Have Fun…


You’ve definitely heard about Xtext, the famous text modeling framework and community award winner. We are all looking forward to the new project-management wonder: the release of Helios, upcoming on June 23rd, which will include Xtext 1.0.0. In this article, I want to describe some aspects of integrating Xtext-based languages into the IDE. continue reading…

Yesterday, I discovered a funny nuance of the Java programming language which I didn’t know before, and decided to share it with you. I was designing an API for transporting changes of relationships between two DTO types. Since I wanted to support batch changes, I created a class for carrying these:

class ManyToManyDelta<S extends BaseDto<?>, T extends BaseDto<?>> {

  List<SimpleRelationship<S,T>> relationshipsToAdd;
  List<SimpleRelationship<S,T>> relationshipsToRemove;
  ...
}

class SimpleRelationship<V extends BaseDto<?>, W extends BaseDto<?>> {

  // BaseDto classes are identified by the parameterized Id
  Id<V> left;
  Id<W> right;

  SimpleRelationship(BaseDto<V> one, BaseDto<W> another) {
    left = one.getId();
    right = another.getId();
  }
}

Having this structure, you can model the relationship between two instances of types A and B by an instance of SimpleRelationship<A, B>. If you want to communicate the creation of a relationship, you put the latter into the relationshipsToAdd list; if you want to model a deletion, you put it into the relationshipsToRemove list.

Now it was time to develop methods for accessing the relationship lists inside the ManyToManyDelta:

class ManyToManyDelta<S extends BaseDto<?>, T extends BaseDto<?>> {
  ...
  public void add(SimpleRelationship<S, T> toAdd) {
    if (toAdd == null) { /* react */}
    this.relationshipsToAdd.add(toAdd);
  }
  ...
}

Now suppose you have a batch update (e.g. an array or list) of SimpleRelationship objects and would like to add them with one invocation instead of a series of invocations, e.g.:

class ManyToManyDelta<S extends BaseDto<?>, T extends BaseDto<?>> {
  ...
  public void add(SimpleRelationship<S, T>[] toAdd) {
    if (toAdd == null) { /* react */}
    this.relationshipsToAdd.addAll(Arrays.asList(toAdd));
  }
  public void add(SimpleRelationship<S, T> toAdd) {
    if (toAdd == null) { /* react */}
    this.relationshipsToAdd.add(toAdd);
  }
  ...
}

Using the varargs feature of Java, you could also write the equivalent:

class ManyToManyDelta<S extends BaseDto<?>, T extends BaseDto<?>> {
  ...
  public void add(SimpleRelationship<S, T>... toAdd) {
    if (toAdd == null) { /* react */}
    this.relationshipsToAdd.addAll(Arrays.asList(toAdd));
  }
  ...
}

That would be nice, right? By the way, it is a good idea to write some client code while developing an API – it reveals potential problems:

  ...
  A entityA = ...;
  B entityB = ...;
  ManyToManyDelta<A, B> delta = new ManyToManyDelta<A,B>();
  delta.add(new SimpleRelationship<A,B>(entityA, entityB));

Coding this results in a type-safety warning: a generic array of SimpleRelationship<A, B> is created for a varargs parameter. This reveals a restriction of the Java language: you cannot safely create an array of a parameterized type, and since varargs parameters are backed by arrays, every such invocation is flagged as potentially unsafe.

Finally, if you want to offer convenience methods for one and for many items, you have to do it the old-fashioned way, by providing overloaded methods.
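
Spelled out, that old-fashioned way might look like the following sketch, where the batch variant takes a List instead of an array and therefore avoids the generic array creation entirely:

class ManyToManyDelta<S extends BaseDto<?>, T extends BaseDto<?>> {
  ...
  // convenience method for a single relationship
  public void add(SimpleRelationship<S, T> toAdd) {
    if (toAdd == null) { /* react */ }
    this.relationshipsToAdd.add(toAdd);
  }

  // batch variant: List<SimpleRelationship<S, T>> is a perfectly legal
  // parameterized type, so no generic array is ever created
  public void addAll(List<SimpleRelationship<S, T>> toAdd) {
    if (toAdd == null) { /* react */ }
    this.relationshipsToAdd.addAll(toAdd);
  }
  ...
}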