

Category: Eclipse

If you are interested in Xtext and the new features introduced in the upcoming version 2.0, you might want to install and try them out. Since it will be officially released together with Eclipse Indigo, you have to execute some manual steps. In order to install the new features, you need to add two additional update sites to your update manager and download the update site containing Xtext itself. The following steps worked for me:

Thanks to Dennis Huebner for the hints….


JFace Databinding enables an easy binding between values inside of data models and SWT/JFace widgets. No more boring listeners to implement – just create observables and connect them using the data binding context. There are several brilliant articles written about it. My favorites are those from Ralf Ebert and Lars Vogel.

One of the interesting aspects of databinding is data validation. The update strategies, responsible for propagating changes in models or in widgets, can be supplied with validators that make sure the data changes are legal. At the same time, the JSR-303 Bean Validation specification focuses on a modern, standardized way of data validation. In this post, I combine these subjects and use JSR-303 in JFace Databinding validators.

One of the core insights of JSR-303 is the idea of annotating data validation constraints on the data itself. It is indeed a good observation that validation code strongly relies on the structure and semantics of the data. To follow this idea consequently, the application developer should have to care about validation as little as possible while implementing business logic. A much better idea is to encapsulate the entire validation into domain-specific types. Let me demonstrate it by example; imagine the following class:

public class Customer {
  private String name;
  private String address;
  private String zip;
  private String city;
}

This is perfectly reasonable, but now consider not only the data storage/transport aspects, but also the validation aspects. A standard approach would be to use the following validator logic in the databinding:

public class CustomerComposite {
[...]
  public void bindValues(Customer model, DataBindingContext dbc) {
    UpdateValueStrategy m2t = new UpdateValueStrategy();
    m2t.setAfterGetValidator(new IValidator() {
      @Override
      public IStatus validate(Object value) {
        String name = (String) value;
        if (name == null || !Helper.isRegex(name, "[A-Za-z -]*")) {
          return ValidationStatus.error("Wrong name");
        }
        return ValidationStatus.ok();
      }
    });
    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(namefield),
      BeanProperties.value(Customer.class, "name").observe(model),
      new UpdateValueStrategy(), m2t);

    m2t = new UpdateValueStrategy();
    m2t.setAfterGetValidator(new IValidator() {
      @Override
      public IStatus validate(Object value) {
        String zipCode = (String) value;
        if (zipCode == null || zipCode.length() != 5 || !Helper.isRegex(zipCode, "[0-9]*")) {
          return ValidationStatus.error("Wrong zip code");
        }
        return ValidationStatus.ok();
      }
    });
    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(zipfield),
      BeanProperties.value(Customer.class, "zip").observe(model),
      new UpdateValueStrategy(), m2t);
  [...]
  }

That is quite a lot of code, and remember that JFace Databinding code like this cannot be reused in other parts of the application. Let's put the validation logic on the data declaration, the way JSR-303 proposes:

public class Customer {
  @NotNull
  @Pattern(regexp = "[A-Za-z -]*")
  private String name;
  private String addressLine;
  @Size(min=1, max=5)
  @Pattern(regexp = "[0-9]*")
  private String zip;
  @NotNull
  @Pattern(regexp = "[A-Za-z -]*")
  private String city;
}

As a next step, let us develop an update strategy factory which creates update strategies with an embedded validator for JSR-303 Bean Validation constraints.

public class BeanValidator implements IValidator {
  private ValidatorFactory factory = Validation.buildDefaultValidatorFactory();

  @Override
  public IStatus validate(Object value) {
    Set<ConstraintViolation<Object>> violations = factory.getValidator().validate(value,
      new Class<?>[] { Default.class });
    if (violations.size() > 0) {
      List<IStatus> statusList = new ArrayList<IStatus>();
      for (ConstraintViolation<Object> cv : violations) {
        statusList.add(ValidationStatus.error(cv.getMessage()));
      }
      return new MultiStatus(Activator.PLUGIN_ID, IStatus.ERROR,
        statusList.toArray(new IStatus[statusList.size()]), "Validation errors", null);
    }
    return ValidationStatus.ok();
  }
}

public class StrategyFactory {
 public static UpdateValueStrategy getStrategy() {
   UpdateValueStrategy strategy = new UpdateValueStrategy();
   strategy.setAfterConvertValidator(new BeanValidator());
   return strategy;
 }
}

Using the StrategyFactory, the validation code inside of the composite becomes trivial:

public class CustomerComposite {
[...]
  public void bindValues(Customer model, DataBindingContext dbc) {
    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(namefield),
      BeanProperties.value(Customer.class, "name").observe(model),
      new UpdateValueStrategy(), StrategyFactory.getStrategy());

    dbc.bindValue(WidgetProperties.text(SWT.Modify).observe(zipfield),
     BeanProperties.value(Customer.class, "zip").observe(model),
     new UpdateValueStrategy(), StrategyFactory.getStrategy());
 [...]
}

An important property of the introduced validation approach is the fact that it can be reused in other application layers (e.g. in the service layer or the data access layer). In other words, you can use the same validation logic across the entire application and just remain valid…
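To make this reuse concrete without pulling in a Bean Validation provider, here is a minimal, hypothetical stand-in (the @Matches annotation and the MiniValidation class are inventions for illustration, not part of JSR-303): any layer that can see the annotated class can run the same reflective check, which is exactly the property the BeanValidator above relies on.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature stand-in for the JSR-303 idea (stdlib only):
// constraints live on the data class, and any layer can run the same check.
public class MiniValidation {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Matches { String value(); }

    public static class Customer {
        @Matches("[A-Za-z -]*") String name;
        @Matches("[0-9]{5}") String zip;
        Customer(String name, String zip) { this.name = name; this.zip = zip; }
    }

    /** Checks every annotated field and returns the violation messages. */
    public static List<String> validate(Object bean) {
        List<String> violations = new ArrayList<>();
        for (Field f : bean.getClass().getDeclaredFields()) {
            Matches m = f.getAnnotation(Matches.class);
            if (m == null) continue;
            try {
                f.setAccessible(true);
                Object v = f.get(bean);
                if (v == null || !v.toString().matches(m.value())) {
                    violations.add("Invalid value for " + f.getName());
                }
            } catch (IllegalAccessException e) {
                violations.add("Cannot read " + f.getName());
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        System.out.println(validate(new Customer("John Doe", "12345")));
        System.out.println(validate(new Customer("John Doe", "12a45")));
    }
}
```

A service layer can call the same validate() on the same Customer before persisting, so the UI and the backend cannot drift apart in their validation rules.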

Google Guice

Abstract

Development of an Eclipse RCP as a rich client of a multi-tier Java Enterprise application becomes an interesting alternative to other frontend technologies. An important aspect is the ability to develop and test the frontend independently of the backend. In this article, an approach for testing and de-coupling of server and client development in Eclipse RCP is introduced.

Business Delegate, Service Locator and Dependency Injection

In a Java multi-tier application, the business logic is implemented in the form of server-hosted components (EJBs, Spring beans, OSGi services, etc.). In this example, an EJB backend is used, but it can easily be replaced with the other technologies mentioned previously. A rich client is connected to the server using some remoting technology and contains a local storage for client-specific state, which allows building more complex and reactive applications. A common approach to hiding the aspects of remote invocations on the client side is the use of the Business Delegate enterprise design pattern. My favorite way of implementing it is to define a technology-independent business interface (POJI = Plain Old Java Interface) and implement it on the server side by the server beans and on the client side by the business delegates. This article uses the following business interface as an example:

public interface MyBusinessService extends BaseBusinessService {

	/**
	 * Full qualified name of the interface (used for Binding).
	 */
	String IF_FQN = "....MyBusinessService";

	/**
	 * Does something on server.
	 *
	 * @param parameter Parameter of invocation.
	 * @return Result of execution.
	 */
	List<OperationResultDto> doSomeStuff(ParameterDto parameter);
}

The delegates make use of the Service Locator design pattern. Here is an example of what the implementation of the base facade, a superclass of all business delegates, can look like:

public abstract class BaseFacade {
...
	/**
	 * Delegates a call to a stateless session bean on the server.
	 *
	 * @param <T> type business interface
	 * @param iterfaze business interface
	 * @param jndi binding on the server
	 * @return result of invocation
	 */
	public static <T> T delegate(Class<T> iterfaze, String jndi) {
		return Activator.getDefault().getServiceLocator().getStateless(iterfaze, jndi);
	}

	/** ... */
	public static void debug(String message) {...}
}

The ServiceLocator.getStateless() method hides the lookup of the remote EJB. Using the BaseFacade, the business delegate looks as follows:

public class MyBusinessServiceFacade extends BaseFacade implements MyBusinessService {

	public List<OperationResultDto> doSomeStuff(ParameterDto parameter) {
		debug("entering doSomeStuff(ParameterDto)");
		final List<OperationResultDto> result = delegate(MyBusinessService.class, MyBusinessService.IF_FQN)
				.doSomeStuff(parameter);
		debug("leaving doSomeStuff(ParameterDto)");
		return result;
	}

}
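To illustrate what getStateless() does behind the scenes, here is a simplified, hypothetical stand-in: instead of a JNDI lookup against an application server, it resolves business interfaces from an in-memory registry. The class and method names below are illustrative, not the article's real implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified service locator: resolves a business interface
// from an in-memory registry instead of performing a remote JNDI lookup.
public class SimpleServiceLocator {

    private final Map<String, Object> registry = new HashMap<>();

    /** Registers a service implementation under its JNDI-style binding name. */
    public void register(String jndi, Object service) {
        registry.put(jndi, service);
    }

    /** Mirrors the shape of ServiceLocator.getStateless(Class, String). */
    public <T> T getStateless(Class<T> iterfaze, String jndi) {
        Object service = registry.get(jndi);
        if (service == null) {
            throw new IllegalStateException("No service bound under " + jndi);
        }
        return iterfaze.cast(service); // type-safe cast to the business interface
    }
}
```

The real locator replaces the map lookup with an InitialContext lookup and a narrowing cast, but the contract toward the business delegate is the same.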

This setup results in the following architecture:

Architecture

Business Delegate Boilerplate

The setup looks good in theory, but in fact it is pretty boring to program on the client side. You can reduce the effort of creating business delegates (in fact I use MDSD techniques and Xtext to generate them), but in every place a service is required, the business delegate is instantiated directly. The approach works, but it is just not nice, because you reference the implementation directly.

A common approach to avoiding such direct instantiation code is the usage of a Dependency Injection framework. A very popular one is Google Guice, which is used in this article. The essential idea of Google Guice is to configure a binding between a dependency and its resolution and to use Google Guice as a kind of factory that creates instances and injects dependencies into them. For the creation of the binding, Guice offers the class AbstractModule to subclass from.

public class ServiceFacadeModule extends AbstractModule {

	/**
	 * @see com.google.inject.AbstractModule#configure()
	 */
	@Override
	protected void configure() {
		bind(MyBusinessService.class).to(MyBusinessServiceFacade.class);
	}
}
...
public class InjectorHolder {
...
	private Injector injector;

	public static void configureInjector(AbstractModule module) {
		InjectorHolder.getInstance().setInjector(Guice.createInjector(module));
	}

	/**
	 * Creates an instance of a class.
	 *
	 * @param <T> type of the class
	 * @param clazz type to create
	 * @return an instance with injected dependencies
	 */
	public static <T> T get(Class<T> clazz) {
		return getInstance().getInjector().getInstance(clazz);
	}
...
}

In order to hide references to Guice classes in client code, the DI can be encapsulated inside an InjectorHolder, which acts as a factory for instances with service references:

/**
 * Class with a reference.
 */
public class DataSource {

	/**
	 * Reference to the service.
	 */
	private MyBusinessService myBusinessService;

	@Inject
	public void setMyBusinessService(MyBusinessService myBusinessService) {
		this.myBusinessService = myBusinessService;
	}
}
/**
 * Client which requires the data source with injected references.
 */
public class MyView {

	/**
	 * Let Google Guice create the instance and inject dependencies.
	 */
	private DataSource source = InjectorHolder.get(DataSource.class);
}

Note that the data source uses setter injection for the service implementation, and that the InjectorHolder serves as a factory to create an instance of the data source with the injected reference.
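The binding idea can be sketched without Guice at all. The following hand-rolled TinyInjector is purely illustrative (it supports only no-argument constructors and no setter injection); it shows what bind(iface).to(impl) and getInstance(clazz) conceptually do before Guice adds scopes, injection points, and configuration modules on top.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal hand-rolled sketch of the binding idea behind Guice:
// map an interface to an implementation class, then instantiate on request.
public class TinyInjector {

    private final Map<Class<?>, Class<?>> bindings = new HashMap<>();

    /** Corresponds to bind(iface).to(impl) in a Guice module. */
    public <T> void bind(Class<T> iface, Class<? extends T> impl) {
        bindings.put(iface, impl);
    }

    /** Corresponds to injector.getInstance(clazz). */
    public <T> T get(Class<T> iface) {
        Class<?> impl = bindings.getOrDefault(iface, iface);
        try {
            return iface.cast(impl.getDeclaredConstructor().newInstance());
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot instantiate " + impl, e);
        }
    }
}
```

Swapping an implementation then means changing one bind() call, not touching every call site, which is exactly why the mock setup described below works so smoothly.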

Packaging Guice

After this short introduction to Guice, it is time to package this third-party library into the Eclipse RCP client. In fact, it is all about putting the JARs (guice-2.0.jar, aopalliance-1.0.jar) into some folder inside the client plug-in and modifying the MANIFEST.MF so that the JARs are on the bundle class-path and the packages are listed as exported.
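The relevant MANIFEST.MF entries might look roughly like this; the lib/ folder name and the exported package list are illustrative and depend on the Guice version you package:

```
Bundle-ClassPath: .,
 lib/guice-2.0.jar,
 lib/aopalliance-1.0.jar
Export-Package: com.google.inject,
 com.google.inject.binder,
 org.aopalliance.intercept
```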

What about mock?

After the client has the ability to use business delegates, it can access the business functionality of the server. In fact, this requires that the server is already implemented. In order to decouple the client from the server development, mocks can be used. Mocks are popular in the context of unit tests, but they can be used to simulate the behavior of the server implementation as well. Since mocks should not be delivered into production, it is a good idea to put them into a separate mock plug-in, included in a special mock feature. The mock plug-in should export its packages. These should be imported by the main plug-in, instead of defining a dependency on the mock plug-in directly. The mock feature is included in the product / top-level feature as an optional feature. This specific configuration allows the main plug-in to instantiate classes from the mock plug-in if it is delivered, but doesn't produce errors if the mock plug-in is not included in the release.

Since the business delegate implements the business interface, its mock should do so as well:

public class MyBusinessServiceMock implements MyBusinessService {

	public List<OperationResultDto> doSomeStuff(ParameterDto parameter) {
		debug("entering doSomeStuff(ParameterDto)");
		final List<OperationResultDto> result = new ArrayList<OperationResultDto>();
		result.add(new OperationResultDto(parameter.getValue()));
		debug("leaving doSomeStuff(ParameterDto)");
		return result;
	}
...
}

Since the mocks should also be injected by Guice, we define the binding module as well.

public class ServiceMockModule extends AbstractModule {
	protected void configure() {
		bind(MyBusinessService.class).to(MyBusinessServiceMock.class);
	}
}

Finally, we have two implementations and two Guice AbstractModule implementations binding them. The last missing piece is a dynamic configuration which allows switching between them easily. For this purpose we use the extension-point mechanism of Eclipse and define the following extension point (all documentation elements are removed for readability):

<schema targetNamespace="..." xmlns="...">
   <element name="extension">
      ...
      <complexType>
         <sequence><element ref="moduleConfiguration" minOccurs="1" maxOccurs="unbounded"/></sequence>
         <attribute name="point" type="string" use="required"></attribute>
         <attribute name="id" type="string"></attribute>
         <attribute name="name" type="string"></attribute>
      </complexType>
   </element>
   <element name="moduleConfiguration">
      <complexType>
         <attribute name="moduleClassname" type="string" use="required">
            <annotation>
               <appInfo><meta.attribute kind="java" basedOn="com.google.inject.AbstractModule:"/></appInfo>
            </annotation>
         </attribute>
         <attribute name="priority" type="string" use="required"></attribute>
      </complexType>
   </element>
</schema>

Using this definition, a plug-in can extend the main plug-in by providing moduleConfiguration elements, each consisting of the name of a class extending Guice's AbstractModule and a priority. Using the following utility class, the module configurations can be read:

public class PluginUtility {

	public static TreeMap<Integer, AbstractModule> getModuleConfigurations() throws CoreException {
		final TreeMap<Integer, AbstractModule> moduleConfigurations = new TreeMap<Integer, AbstractModule>();
		IExtension[] moduleConfigurationExtensions = Platform.getExtensionRegistry().getExtensionPoint("...id...").getExtensions();
		for (IExtension moduleConfiguration : moduleConfigurationExtensions) {
			for (IConfigurationElement configElement : moduleConfiguration.getConfigurationElements()) {

				AbstractModule module = (AbstractModule) configElement.createExecutableExtension("moduleClassname");
				String priorityAsString = configElement.getAttribute("priority");
				int priority = 0;
				try {
					priority = Integer.parseInt(priorityAsString);
				} catch (NumberFormatException e) {
					throw new CoreException(...);
				}

				moduleConfigurations.put(Integer.valueOf(priority), module);
			}
		}
		return moduleConfigurations;
	}
}

Using this utility, the main plug-in can read the AbstractModules available at runtime and configure Dependency Injection accordingly. This must happen before the first usage of the InjectorHolder. The AbstractModule with the highest priority (biggest number) is selected.

public class InjectorHolder {
...
	private Injector injector;

	public static InjectorHolder getInstance() {
		if (instance == null) {
			instance = new InjectorHolder();
			try {
				instance.injector = Guice.createInjector(PluginUtility.getModuleConfigurations().lastEntry().getValue());
			} catch (CoreException e) {
				throw new IllegalStateException("Cannot read module configurations", e);
			}
		}
		return instance;
	}
...
}
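The priority rule used above falls out of the TreeMap semantics: lastEntry() returns the mapping with the greatest key. The rule can be sketched in isolation (module names here are just strings for illustration):

```java
import java.util.TreeMap;

// Sketch of the priority rule: module configurations are keyed by priority,
// and lastEntry() yields the one with the highest priority.
public class PrioritySelection {

    public static String selectModule(TreeMap<Integer, String> configurations) {
        return configurations.lastEntry().getValue();
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> modules = new TreeMap<>();
        modules.put(1, "ServiceFacadeModule");  // default binding
        modules.put(10, "ServiceMockModule");   // wins if the mock plug-in is installed
        System.out.println(selectModule(modules)); // prints ServiceMockModule
    }
}
```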

Finally, the two binding modules should use the extension point. The plug-in containing the business delegates should define a module configuration with a "standard" priority:

   <extension
         point="....ModuleConfiguration" name="ModuleConfiguration">
      <moduleConfiguration
            moduleClassname="....ServiceFacadeModule"
            priority="1">
      </moduleConfiguration>
   </extension>

The mock plug-in should define a higher priority, which wins against the business delegate if included in the release.

   <extension
         point="....ModuleConfiguration" name="ModuleConfiguration">
      <moduleConfiguration
            moduleClassname="....ServiceMockModule"
            priority="10">
      </moduleConfiguration>
   </extension>

Summary

In this article, an implementation approach for the Business Delegate and Service Locator patterns is shown. Usage of the Google Guice Dependency Injection framework allows for a flexible resolution of dependencies in client code. Since Guice itself doesn't support multiple binding configurations, we introduce a self-defined extension point, which allows registering different DI configuration modules and assigning different priorities to them. In addition, we use the ability of Eclipse to define and use optional features to foster runtime-based configuration. Using different run configurations, you can start the RCP client with different implementations of your business services. If the mock plug-in is included, its higher priority will win against the business delegates. Therefore, the development of the client can be performed using mock objects instead of real business delegates without any additional configuration.

Have Fun…


You’ve definitely heard about Xtext, the famous text modeling framework and community award winner. We are all looking forward to the new project management wonder: the release of Helios, upcoming on June the 23rd, which will include Xtext 1.0.0. In this article, I want to describe some aspects of the integration of Xtext-based languages into an IDE.

The Eclipse RCP has become a prominent platform for building client software. One of the delivery mechanisms supported by Eclipse RCP is Sun’s Java Web Start (JWS). Since the Galileo edition, some changes have been introduced in the platform. This article provides some hints for creating an RCP application delivered by Java Web Start.

Packaging

In order to package the RCP, I suggest using feature-based products as described in a previous article. Following it, you should have a top-level plug-in (also referred to as the product-defining plug-in) and a top-level feature, which is called the "wrap" feature in the context of Java Web Start.

Exporting the product

Before you start with Java Web Start (JWS), export the product and make sure it starts as a standalone application. In doing so, you have to ensure that your references to the plug-ins are correct. One way of doing that is to hit the Validate button in the top left of the product editor. If the validation is successful, try to export the product. The PDE builder will run and create a distribution. The errors of the compiler/builder/assembler, if any, are reported to files zipped into the logs.zip file in the distribution directory.

An event of the annual series of Eclipse Demo Camps is taking place in Hamburg again. The event was planned for November, but actually takes place in December. As usual, Peter and Martin are responsible for the organization. To make it short:

If you want to attend, make sure you find a minute to write your name down in the Eclipse Wiki. I suppose this kind of event is well known. If you have never heard of it – look at the interesting topics and the attendee list of more than one hundred people. You will have the opportunity to listen to the talks, to speak with interesting people and to get some news from Eclipse committers and users. In the end you usually get some food and beverages, to make the atmosphere a little more relaxed. If you have never been there, it is worth a visit…

Even if some time has passed since the events EWiTa 2009 and Eclipse Summit Europe 2009, I would like to share my impressions, since I took part in both events…

EWiTa 2009

EWiTa 2009 stands for Elmshorner Wirtschaftsinformatiktag, which is German for “Elmshorn Business Information Systems Day”. The event was organized by Frank Zimmermann of Nordakademie – a private university in Northern Germany. Even if the event is not an official sequel of MDSD Today, there were many similarities. The event had two tracks: the process modeling track and the MDSD track. After an excellent keynote from Mathias Weske about the importance of collaboration during the process of (business) modeling, I stayed in the business track to listen to Andrea Grass (oose GmbH) on the combination of UML and BPMN 2.0. To tell the truth, I’m not a big fan of this approach, especially because of the conceptual mismatch between modeling business behavior and modeling technical behavior. After a coffee break I enjoyed an excellent talk from itemis, reporting about the success story of Xtext in a big project of Deutsche Börse AG (German Stock Exchange).

After a small lunch, I listened to two Arcando consultants reporting about their eTicketing project. The strange thing about this talk was that they just made some advertisement for a standard Microsoft product. After this, I enjoyed an interesting talk on business modeling based on COBIT processes. Finally, I switched the track again to MDSD and listened to an interesting talk on the usage of MDSD techniques for the generation of DynPro and ABAP code. Pictures can be found in my ‘EWiTa 2009’ set on Flickriver.

In general I enjoyed the event. I think the MDSD track was a little more technical, but the combination was good.

Eclipse Summit Europe 2009

The Eclipse Summit Europe 2009 (ESE 2009) took place on October 27-29 in Ludwigsburg, Germany. It is the European complement to EclipseCon in the US. In contrast to the spring event in Santa Clara, CA, the ESE is an autumn event in a beautiful baroque town near Stuttgart. The event lasted three days and is a must for Eclipse-related technology people. As usual, the venue was great, the keynotes excellent and the talks interesting. And of course it was the place to meet the committers and evangelists, see them in action, talk to them and discuss future directions.

Symposium Day

The first day is an arrival day. People arrive during the day; some of them are already there. I was visiting the modeling track the whole day and had much fun with Ed Merks, Eike Stepper and Thomas Schindl in the morning. Later, in the Modeling Symposium, Eike showed the eDine RCP based on CDO, UBS envisioned the modeling tool pipeline, and so on. About ten people showed different technologies on and about modeling. Interesting, unstructured and relaxed. And of course, the first evening is the opportunity to speak with all the Eclipse VIPs and drink a cold beer.

First Day

The main conference day was Wednesday, and it started with a great keynote on functional programming held by Don Syme, the father of F#. Surprisingly, the talk was about F#. For some of us, there was not enough functional beauty exposed in the talk, so I scheduled a private session with Don, and he told Markus Voelter, Heiko Behrens and me about some interesting F# features.

I took part in the How about I/O session on JPicus, a very interesting tool for tracking I/O problems in Java programs, developed by SAP. The Climb The Tower of Babel session was about the Eclipse translation project. Interesting is the runtime editor, which allows you to translate your running application. After a delicious lunch, I enjoyed two modeling talks: Xtext and EMF Query. The itemis team introduced some really new features which, in my opinion, make Xtext a unique technology. Just to mention a few of them: white-space aware parsing, usage of scopes and qualified names, usage of the index (constructed by a builder) in your own language, separation of markers and annotations in the editor, integration of the generator on save, declarative quick fixes in your DSL, strings with special meaning, references to Java types, and much more… EMF Query is a project developed by the SAP team that leverages the index with a query language. The language is a SQL-like DSL for querying EMF-based models. The infrastructure is very interesting and allows complex scenarios with multiple model providers – a very technical and, I believe, very interesting project.

Second Day

After the keynote on the importance of software ecosystems and a deep economic analysis of the Eclipse ecosystem, I switched off the track to be able to prepare my talk. I was reporting about the IDE for TLA+ which I had been building during the last nine months at Microsoft Research, and which will be available soon. The main emphasis of the talk was not the demo of the IDE, but the exchange of experiences on building one. In particular, I focused on the possible pitfalls and conceptual mismatches of IDEs depending on the integrated language. The slides will be available soon.

In the end, I enjoyed the event very much. I even liked it more than EclipseCon. Modeling still seems to be the most interesting part of the Eclipse ecosystem. Technologies like Xtext and CDO gain maturity, and new technologies like EMF Query are being developed. It was nice to see the people again… As usual, some pictures can be found in my ‘Eclipse Summit Europe 2009’ set on Flickriver.