Jun 22 2015

Cassandra Errors: The UnSet Upsert

During a late night coding session I got the following trace from the Datastax Cassandra Java Driver:

com.datastax.driver.core.exceptions.SyntaxError: line 1:36 mismatched input 'WHERE' expecting K_SET (UPDATE my_table [WHERE] id...)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:101) ~[cassandra-driver-core-2.1.5.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:140) ~[cassandra-driver-core-2.1.5.jar:na]
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:293) ~[cassandra-driver-core-2.1.5.jar:na]
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:455) ~[cassandra-driver-core-2.1.5.jar:na]
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:734) ~[cassandra-driver-core-2.1.5.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.10.1.Final.jar:na]

Ok. Let’s think it through. Did we not put a key column in the WHERE clause? Nope. Did we put a non-key column in the WHERE clause? Nope. Did we put a key column in the SET clause? Nope. OK, what then?

Well… my code has many “setIfNonNull(…)” helpers, and it turns out my test dataset was all nulls. Thus nothing was being set, and the generated UPDATE had no SET clause at all! Obviously an UPDATE must update something (though the lines are blurred in C* upsert land).

So, this cryptic message from C* should really read “Update statement missing SET clause, you fool.” Now if I could find it in the source, I’d submit a PR.
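The workaround is simple enough: track whether any assignment was actually recorded and skip the statement when nothing was set. Here's a minimal sketch of the idea, with hypothetical names (`GuardedUpdate`, `setIfNonNull`) standing in for my helpers; real code would use bind markers rather than inlining values:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch (hypothetical names): collect SET assignments and refuse to
// emit an UPDATE when nothing was set, instead of letting C* throw the
// cryptic "expecting K_SET" SyntaxError. Real code would use bind markers
// rather than concatenating values into the CQL string.
public class GuardedUpdate {

    private final List<String> assignments = new ArrayList<>();

    // Mirrors a setIfNonNull(...) helper: only record the column when the value is present
    public GuardedUpdate setIfNonNull(String column, Object value) {
        if (value != null) {
            assignments.add(column + " = " + value);
        }
        return this;
    }

    // Returns null when there is nothing to SET; callers skip execution in that case
    public String buildCql(String table, String whereClause) {
        if (assignments.isEmpty()) {
            return null; // nothing to SET -- executing this would trigger the SyntaxError above
        }
        return "UPDATE " + table
                + " SET " + String.join(", ", assignments)
                + " WHERE " + whereClause;
    }
}
```

With an all-null test row, `buildCql` returns null and the statement is never sent, which turns the driver's parser error into an explicit no-op (or a loud assertion, if you prefer).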

 


Feb 28 2015

Don’t use Git to Deploy Code

github_down

Not again GitHub! Save us Muscular Failure Unicorn!

Just don’t. If you can’t reason why, please stop developing code critical to your business.

When it works, it works

Git is great. Push here, pull there. It works so well that you might be fatuously convinced it’s the perfect tool for deploying code to production. Even worse, it might appear to work well at this task, further reinforcing your choice. However, the Achilles’ heel of any DVCS is your origin provider. Let’s say that BitBucket has borked their database for the 4th time this month, or GitHub is suffering yet another DDoS attack. Then we see posts opining about failed Git-based app deployments.

When shit goes wrong, things get complicated

Now shit’s gone wrong. No worries, there must be a more complicated way to solve what appeared to be a simple workflow. We’ve got all these Unix CLI tools and can bodge something together. I think I can just scp the files over. Wait, better rsync them, I’m not sure exactly which ones changed. Arr… so many flags, do I want to compare file checksums or timestamps? Maybe I’ll tarball up everything and push it over to the servers. What was the command string again to untar and ungzip? Crap, I included my file permissions and they don’t work on the server. Huh, how was I supposed to know the code stored running PIDs in various files sprinkled throughout the source? WTF, someone tweaked some of those settings files server-side and I just overwrote them. Fuck… I made a backup of that server directory before I started, right? Alright, Hail Mary time, I’ll just export my remote repo and import it as a different origin on the server. How the hell do I do that?

Shit goes wrong at the wrong time

BitBucket's Server Down Page

No knowledge of HTTP codes necessary at BitBucket.

The above might be a fun exercise on the QA server when it’s 3pm and everyone’s in the office on a slow Tuesday, but that’s not how these things unfold. Nope. What will really happen is that a hotfix needs to go out and gets assigned to the intern, because he needs the experience, you know. And because he’s the only guy on call during Thanksgiving, since everyone else is away on vacation. But now he’s riding the wheels off this Rube Goldberg machine, getting both hands stuck in the tar pit and only working himself deeper as he borks the entire production setup, and your site is down for the count at 2am on Black Friday.

Special snowflake servers

Using Git checkouts to update code encourages Special Snowflake servers. Each server is a unique, artisan-crafted piece of Unix art. Like literal snowflakes, no two are the same. No one really understands how it all works, and the little documentation that exists was last updated in the Bush administration. Running `git status` shows lots of little file changes to get things just right on each machine, some versioned, some not, so no one has had the balls to `git reset --hard` for years now.

fail_bitbucket

Satanic BitBucket Logo of Doom

The better, deterministic way

Deploy your code as a self-contained distributable. In Java we’ve got WAR and EAR files. In Play Framework we’ve got binary distributables you unzip and run. ASP.NET can be packaged, just like Ruby and many others. They’re like MREs, but you just unzip them, no need to add water. You don’t care what version of Scala is running on the server host, whether the proper DLL is loaded, or if you’re on the proper Ruby release. It Just Works™. When shit’s broken and your customers are screaming on Twitter, you want your code to Just Work.

Distributing the distributables

“The distributable is huge!” you warn. Sure, 78MB won’t fit on a floppy, but we’ve got 10G server interconnects; I think we’ll be OK. “But where will we serve those from?” you say, still unconvinced. How about AWS S3, with eleven nines of durability (99.999999999%). Or, you can set up your own OpenStack Swift object store if you’d prefer.

The process is simple:

  1. Build and unit/integration-test a commit on CI
  2. Push the passing build’s distributable to S3
  3. Your deploy script on the server stops the app, downloads from S3, starts the app

If S3 is down (better take some real MREs into the basement, the end is near), you either:

  1. Download the distributable artifact from CI and scp it to the server
  2. If CI and S3 are down, build locally and scp it to the server

The point is to have a canonical way to turn an explicit state of the source (i.e. a checkout hash) into a binary that will consistently run as desired wherever you deploy it. No chasing thousands of source files. No trying to compile all the source on your workstation, and on your CI, and on your front-end servers. Fight entropy. Choose determinism.

Other Reasons

File contention

Do you work in one of those scripting languages? Say PHP, Ruby, or Python. Ever had your SCM fail to update files because of open file pointers held by running or zombie processes? Prepare yourself for some possibly non-deterministic behavior when you deploy these apps via Git. It’s best to add some pgrep steps to run around and kill off the offending processes, but then you’ve got to ask yourself, “what life choices have led me to run around killing myriad processes on each deploy?”

SCM’s worse than git

Git works pretty well, but what if you’re deploying with another SCM, like SVN? God help you, friend. The change databases that back your local SVN checkout can get corrupted in wondrous ways. The net result can be that SVN says you’re on revision X, and `svn status` shows no files changed locally. When you call `svn update` or check out the target revision, you’re told you’re already up to date… but you’re not. This is true FML territory. If your SCM cannot reliably track changes, it should be cast into a special circle of hell. Sadly, I’ve personally seen this happen three times in a single year. God help you, friend.


Nov 4 2014

Dynatrace Memory Sensor Anti-Patterns

This is a collection of Dynatrace actions I’ve learned to avoid. I’ll add more as my Dynatrace journey evolves.

Judiciously Apply Memory Sensors

If one Memory Sensor is good, then many must be great? Sadly not. Memory sensors are useful as they allow Selective Memory Snapshots to be taken of a subset of the heap graph without taking an entire heap dump. They are delightfully fast and light, but too many will spoil the party.

Any scientific instrument perturbs the system it measures during the act of measurement. With Dynatrace, applying Memory Sensors to core services (singletons) incurs only a ~1ms increase in initialization due to bytecode instrumentation and is not a problem. However, if the instrumented object is a core type which is created and garbage collected often, the effects can be shocking.

Let’s look at adding memory sensors to two core types in my application, LocalDate and Money. The application was creating millions of date objects for a certain batch job as well as money objects. I wanted to see how much heap was consumed by them, so I instrumented these objects with Memory Sensors. Suddenly, the application began to crawl.

Below we see the new application Hot Spot Methods. The init of LocalDate objects takes nearly 4 minutes. Similarly, the BigDecimals inside the Money objects consume an inordinate amount of time. This is a database-bound job, yet here the database calls account for only 0.3% of the hot spot methods. The Dynatrace instrumentation of these Memory Sensors is to blame, which might be a surprise, as the sensors are not even creating PurePaths and we’re not taking any Selective Memory Snapshots.

localDate

Initialization of common objects is consuming 99.7% of job runtime (200 of 3200 job PurePaths shown)

After the Memory Sensors are removed, we see that query execution dominates the job runtime, as expected. We also see that LocalDate instantiation has dropped from 4 minutes to 20ms (too small to appear in the Hot Spots report below). The moral of the story? The Memory Sensors on LocalDate increased its initialization time 12,000-fold!

noLocalDate

12000x faster LocalDate init w/o Memory Sensors (3200 of 3200 job PurePaths shown)

The CPU and garbage collection times were also dilated notably by the wanton application of Memory Sensors. Below we can see that GC time is magnified ~14x and CPU consumption more than doubled, to 93% from 41%. Note that the Memory Sensor case below is truncated because the sensors were removed via Hot Placement during the job; otherwise it would seemingly have taken forever to run.

gcAndCpuDifference

Memory Sensors also double CPU consumption and GC time

The moral of the story? Always instrument the bare minimum necessary items with Dynatrace. All measurements have overhead and perturb the code under test. The greater the instrumentation, the greater the perturbation. You want to be confident that the trends you discover in Dynatrace are due to the code under test, rather than an artifact of the instrumentation.
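One cheap sanity check is to time the hot constructor yourself, before and after attaching sensors, so you know the baseline cost. A crude, self-contained sketch using plain JDK timing (no Dynatrace APIs), with `java.util.Date` merely standing in for a frequently created core type:

```java
// Crude baseline sketch: measure the average construction cost of a hot type
// so you can compare runs with and without Memory Sensors attached.
// java.util.Date is just a stand-in for a hot core type like LocalDate.
public class InitCost {

    // volatile sink keeps the JIT from eliding the allocations entirely
    static volatile long sink;

    static long nanosPerInstance(int iterations) {
        long start = System.nanoTime();
        long acc = 0;
        for (int i = 0; i < iterations; i++) {
            acc += new java.util.Date(i).getTime();
        }
        sink = acc;
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        System.out.println(nanosPerInstance(1_000_000) + " ns per instance (uninstrumented baseline)");
    }
}
```

Run it once uninstrumented and once with the sensor attached; if the per-instance cost jumps by orders of magnitude, the instrumentation, not your code, owns the hot spot.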

 

 


Nov 2 2014

Never Use Glassdoor From Work

Asymmetric Information

whereIsTheSSL

Can you find the SSL in this URL? Hint, you won’t.

Glassdoor is a great idea. Add transparency to the job market by making salaries, interview details, and internal company reviews public. This is information that employers intensely attempt to keep private, despite its dissemination being totally legal. One trick many companies use is telling you that you cannot do something that you legally can do. For example, saying a user cannot do something in the Terms of Service, despite courts having ruled such ToS provisions unenforceable. Or, at bonus and raise time, telling employees that they cannot discuss their pay, even though the courts have declared that employers cannot retaliate against salary discussions.

Glassdoor blows this carefully crafted bastion of asymmetric information wide open. However, they fail to use SSL on the site! That’s right, when you look around for a new job, peruse salaries, or write up a review that’s brutally honest about your firm, Glassdoor sends those cookies and content around the corporate network with no encryption at all.

Malicious or Stupid?

SSL used to be complicated, a decade ago. Welcome to the 21st century. Your Facebook posts, Tweets, and cat videos are all protected by encryption, but your clandestine Glassdoor interactions are not. I’ve twice written their engineering team about it, only to be brushed off. Despite Snowden revealing that everyone is listening to the wire, Glassdoor makes no attempt to protect that information. However, it’s not a foreign government you have to fear, as many employers use network appliances to monitor packets. I’ve no doubt a vendor presently lets you run a report of who has been posting to Glassdoor from within the network, and given the unsecured cookies, they could easily peruse the site as that employee, discovering everything posted from their account. This might sound far-fetched, but employers have long been using automated MITM attacks to intercept employee traffic.

Of note, Glassdoor runs off CloudFlare. CloudFlare makes SSL trivial to enable, but that would mean Glassdoor couldn’t use the FREE account. That’s right, Glassdoor would actually have to pay money to run a site that sells advertising and job postings. What a horrible pain to endure. Think of the Subway sandwich’s worth of SSL fees they would rack up each day.

Browse Securely

Until the IT team at Glassdoor decides to spend a few dollars a day to implement SSL like everyone else in the industry, make sure you stay away from the site on any shared or corporate link, unless you want to let your benevolent HR department know your deepest thoughts about them.

 


Oct 12 2014

Using Immutable Objects with MyBatis

Immutable is Beautiful

I’m a fan of immutable objects. We all know the benefits: simple, no mutators, thread-safe, cache optimizable. However, I see far too many MyBatis (and iBatis) developers adding no-arg constructors to their POJOs simply so that MyBatis can reflect and set all of the values via the field setters. I’ll ask “why does this POJO need to be mutated?” and they quip that it doesn’t, but that these setters and protected no-arg constructors are needed by MyBatis. This violates my principle that you make your library work with your code, not the other way around.

Given the lack of good documentation on immutable objects in MyBatis, I hope the following helps folks.

Example Implementation

We need a ResultMap that will tell MyBatis how to map this to a constructor. This is because Java reflection only exposes the constructor parameter types and order, not the names of the parameters (so claim the MyBatis docs, though Spring somehow manages to do this with bean constructors…).

The mapper maps the column names returned in the query to the types on the constructor. It also lays out the order of the arguments. Make sure the constructor argument order exactly matches that of your POJO.

Note: underscore-prefixed types map to primitive values. `_long` maps to the primitive `long` type; `long` maps to the wrapper type `java.lang.Long`.

<resultMap id="fooViewMap" type="com.lustforge.FooView">
	<constructor>
		<arg column="id"			javaType="_long"/>
		<arg column="is_dirty"	javaType="_boolean"/>
	</constructor>
</resultMap>

Now make sure your query block points to the mapper via its resultMap attribute. Again confirm that the column names returned exactly match those in the map. Note: the order does not need to match for the query.

<select id="getFooViews" resultMap="fooViewMap">
    <![CDATA[
	    SELECT
   		foo.id,
   		foo.is_dirty

		FROM foo
		-- your query here
	 ]]>
</select>

Finally make sure your POJO constructor matches. It’s also a good idea to leave a note to future developers to update MyBatis if they alter the constructor argument types or order.

public final class FooView {

	private final long id;
	private final boolean isDirty;

	// Prescient comment
	// NOTE: MyBatis depends on parameter order and type for object creation
	// If you change the constructor, update the MyBatis mapper
	public FooView(long id, boolean isDirty) {
	    super();
	    this.id = id;
	    this.isDirty = isDirty;
	}

	public long getId() {
	    return id;
	}

	public boolean isDirty() {
	    return isDirty;
	}
	
	// ... don't forget equals/hashcode and toString as well
}

That was easy and now you’re using best practices. Pat yourself on the back and get busy with your new immutable POJO.


Oct 4 2014

PreRelease: Open ICAO 24 Database

I’ve long been a fan of aviation. When sites like FlightAware came out, I was hooked, learning about the contrails around me. I was even more excited when the RTL-SDR revolution enabled everyone to track ADS-B enabled aircraft for $10.

Sometimes outside on a run or drive, I’d see a high altitude contrail do a 180 and wonder, what just happened? In 1998, short of calling the FAA, you’d never know. Jump to 2014 and you can look up the track on FlightAware or FlightRadar24 (et al) and then pull the ATC tape via LiveATC. Information is becoming freer and freer! That’s the information revolution of the Internet at work.

Imagine my chagrin when, running dump1090, I realized there is no freely available database mapping ICAO 24 hex codes to aircraft registrations! Crawl the forums and you’ll find discussions about reverse engineering pay products to extract ICAO 24 codes. Some forum members have even manually amassed spreadsheets of hundreds of thousands of codes and insist on keeping them close to the vest. Others, like AirFrames.org, have databases, but rate-limit lookups and forbid automated usage. Why?

Information Wants to Be Free

The Open Source Software revolution is predicated on the work of a few contributors enhancing the lives and experiences of the whole. People contribute their time to make code that benefits the public many times more than what the contributors put in. Software engineers love to do this, since it provides the code that drives the internet (i.e. Linux, Apache, BSD, Nginx, Android, etc…) and massively increases the productivity of a single developer, because she can freely leverage the high quality works and tools others have made.

So then, why does the ADS-B and aviation hobbyist community not band together to solve this common problem? Why do they resort to reverse engineering paid products and making private exchanges of hex codes through back channels? Partly because there is money to be made, and partly because people feel the need to protect the information they’ve collected. However, the reality is that ICAO 24 hex codes and airframe registration numbers are public information. This information can be found in public registries and is being broadcast continuously into the ether 24/7 in the form of ADS-B and ACARS messages. Under US copyright law, assemblies of public facts cannot be copyrighted, so they should rightly be set free.

Open ICAO 24

The Open ICAO 24 database shall be an assembly of all the 24 bit hex codes, tail numbers, and aircraft types that I can collect with my east coast ADS-B/ACARS network, which is presently in the buildout phase. Given the small number of codes and tails in the world, the cost of operating such a database and API will round down to zero, and it shall remain freely available. The entire database will also be freely downloadable by anyone, and the XML/JSON API will allow instant lookups at whatever rate is needed.

Stay tuned, the beta will be launched shortly and I’ll work to automate its population. However, my present physical plant can only process a few thousand registrations per day, so I invite the community to embrace such openness and contribute as well, so that the entire ADS-B enthusiast community can benefit from the worldwide network and the efforts of its members.


Aug 26 2014

Accessing the GWT History Stack

GWT (Google Web Toolkit) does not supply a direct way to know where users have been within your application. However, you can use a simple listener and stack to record and access history events.

The key is the GWT History object. You can listen to its change event to know the user has gone to another Place. The restriction is that we don’t know when the user has gone back. This is an inherent state-detection problem of the stateless HTTP web. Ideally, it should not matter to your application how a user arrived at a given Place.

We’ll start with an interface to define our new class’s contract. There is a 16-entry limit, since we don’t want to keep filling memory with history locations. I’ve added a method to get the last 16 entries and one to get the last place as well.

/**
 * Stack that tracks browser history interactions
 */
public interface HistoryStack { 

	/**
	 * Get up to the last 16 history events. 0 index is the last visited.
	 * @return
	 */
    String[] getStack(); 

    /**
     * Return the last event, if any. Is not the current place, but current -1
     * @return NULL if no history
     */
    String getLast();
}

 

Now for the implementation. Oddly, since we cannot track back events, we can’t really use this as a stack, but are instead placing Places in a fixed-size queue. Rather than switch to a queue, I’ve stuck with a stack, which is the classic structure for this use case. Folks might get confused if they saw a “HistoryQueue.”

 

/**
 * Create a stack that updates as we navigate, tracking history
 */
@Singleton
public class HistoryStackImpl implements HistoryStack, ValueChangeHandler<String> { 

    private static final int MAX_LENGTH = 16;

    private final Stack<String> stack = new Stack<String>(); 

    // Instantiate via Gin  
    protected HistoryStackImpl() { 
            History.addValueChangeHandler(this);
    } 

    @Override 
    public void onValueChange(ValueChangeEvent<String> event) {

    	// only store MAX_LENGTH elements; evict the oldest first
    	if(stack.size() >= MAX_LENGTH) {
    		stack.remove(0);
    	}
        stack.push(event.getValue()); 
    }

    @Override
    public String[] getStack() {
	    // reverse stack so first entry of array is last visited
	    // return defensive copy
	    final String[] arr = new String[stack.size()];
	    int i=0;
	    for(int n=stack.size()-1; n>=0; n--) {
		    arr[i] = stack.get(n);
		    i++;
	    }
	    return arr;
    }

    @Override
    public String getLast() {
	    // null no prior location
	    if(stack.size()<2) {
		    return null;
	    }
	    return stack.get(stack.size()-2);
    } 
}

 

Finally we’ll tell Gin to ginject this into our application for use, starting it up when the app loads.

public class MyClientModule extends AbstractPresenterModule {

	@Override
	protected void configure() {
		
		bind(HistoryStack.class)
			.to(HistoryStackImpl.class)
			.asEagerSingleton(); // history from startup
...

 

Now that was easy. Just inject your history stack into any presenter that needs to make history-based decisions. In my case, I had a user settings editor. I wanted the “Done” button to go back to the Place the user was last on so they could continue their work there, or, if they started the app on the Settings page, to take them to the home page. This hack fit the bill perfectly. I hope it does the same for you.

P.S. I must give credit to dindeman for the initial revision.


Aug 21 2014

Jenkins vs. Teamcity – The Better CI Tool

Let’s dispel the myth about Jenkins being the gold standard continuous integration tool. I’m sorry, TeamCity is much better. 

Dispelling the Jenkins CI Myth

I started using Jenkins when it was called Hudson, before the Oracle naming spat. Recently, I downloaded and installed it again and was shocked to see that little appears to have changed in all these years. What’s in a UI? Not much if you’re technical, but geez, Jenkins still has the aura of an app knocked together during an all-night hackathon in 1997.

Let’s knock the legs from under this myth.

1. Jenkins is Open Source

Many Jenkins fans are FOSS fans. If there is an open source solution, perhaps buggy or poorly maintained, they feel compelled to use it. Much like one can imagine RMS foregoing a life saving treatment if the medical apparatus didn’t run open source code he’d compiled himself.

Be careful though, as there are few absolute FOSS purists in practice. Inevitably, people use the best tool for the job at hand. Why does a company write code with 23 FOSS tools/languages on closed-source Windows desktops? Probably because it works for them, and because of that special accounting application or antiquated-but-stable engineering software that’s core to the business. Just because other options are Open Source doesn’t make the whole tool chain better in practice.

2. Jenkins is FREE!, TeamCity is Expensive

The Jenkins fan will note that Jenkins is free, but TeamCity costs money. Hiss! Boo!

They’ll not mention that you can use the TeamCity CI server and three (3) build agents for FREE. And that you’re only out $100/agent thereafter and $1000 for the CI server. Anyone bought Visual Studio lately? Anyone use the many $5K/seat tools out there? Anyone… use Windows (Debian lover myself)? They all cost a ton more than Jenkins. Why do you use those rather than the FOSS solution? Perhaps it’s for the quality of the tool or the paid support behind it. Remember, many of us work for profit.

3. We’re an OSS Project, We Can’t Afford Paid Anything

I’m a huge fan of open source projects. I contribute to several. And I frequently spar over what CI tool to use. CloudBees BuildHive, Travis, or your own Jenkins instance? Such groups fatuously write off TeamCity since it would cost cheddar they don’t have. But that completely ignores the fact that JetBrains gives everything away for FREE to open source projects.

4. But There’s a Plugin For That!

My first production encounter with Jenkins was a comedy of errors. The team I joined had a mature Jenkins install, but all of the quotidian tasks were either manual or cumbersome. For example, hand-written jobs that did nothing but free up space from other jobs. Hacks upon hacks and duct tape scripts to make the build chains we used. And throw in a monthly inopportune crash for good measure.

I was aghast. Everything folks had wasted their time on via various scripts and manual efforts was a standard, default, out-of-the-box feature in TeamCity. But stand back if you ask a Jenkins fan about this. They will retort, “but there’s a plugin for that!” Perhaps there is. A non-code-reviewed plugin that does part of what you want and was last updated 19 months and a few major releases ago. Or, there will be three plugins that do almost the same task, and most of it might work, but check the GitHub page and recompile if you want that functionality.

This is sad, given that the configurations TC has out of the box could have saved $10K in developer effort over the last two years. But, you know, TC isn’t FREE!

Other Bones to Pick

Some other things that Jenkins could correct to burnish their product:

Jenkins…

  • NO SECURITY by default? Why? TC is secure out of the box. Come on, man.
  • No Pre-Tested Commit – a TC standard that’s integrated with IntelliJ/Eclipse – and Jenkins has no intention of adding it
  • Defaults to port 8080… way too common a port; it will conflict for nearly every Java dev
  • Startup logs go to .err.log? Why?
  • Lack of timestamps in 2 of 3 logs
  • Plugin install still triggers a server restart, even if no plugins were updated/installed
  • Coarseness of “Auto-Refresh” – it keeps reloading documentation pages! Is it 1998? XHR, anyone?

Conclusions and Disclaimers

Give TeamCity a try. I’ve been loving it for 4 years now and use it on every project. Do I work for JetBrains? Nope. Then why write this? Because everyone I talk to claims Jenkins is God’s gift to integration. It makes me think I must be taking crazy pills, so I’ve written this so someone out there can make a more informed CI tooling decision.

 

Don’t Take My Word For It

For all you know, I’m a shill that screams at fire hydrants in the night. Read the top hits for “TeamCity vs Jenkins” and you’ll discover the same thesis.

 

 


Aug 2 2014

Ringing the Cygwin Terminal Bell

Let’s say you’ve been running several slow processes in a row. They take time, so you catch up on your blog reading while they run, but you have to keep checking back on the terminal to see if they’re done. Wouldn’t it be nice to know when a command is complete? Easy, just have it ring the Cygwin terminal bell!

For example, download a big file, untar it, and let me know when you’re done:

 

wget fatFile.tar.gz; tar -zxvf fatFile.tar.gz; echo -e "\a";

 

So, be sure to echo a \a to ring the bell from now on:

echo -e "\a"

Jun 12 2014

Restoring the Chrome GWT DevMode Plugin

noNappi
Did your DevMode Chrome extension stop working recently? Welcome to the party. The powers of divine wisdom on the Google Chrome team decided that NPAPI was a superannuated security hole and must die. The fact that they proposed no clear alternative solution has led many a plugin (Java, GWT DevMode, Linx Garmin Connect, VDPAU, GNOME Shell Integration, VMware VSphere Client, Nemid) to wither and die. But what about Flash!! Well, to keep important plugins from being impacted, they’ve been whitelisted, but for the rest of us who depend on the Chrome DevMode Plugin… too bad.

The timing is unfortunate, but the number of Linux users that require NPAPI plugins that aren’t Flash is just too small to justify this effort.

Matt Giuca

To boot, the aforementioned plugins “could be rewritten” from scratch in JavaScript, just for Chrome, using the various new APIs, but it will be a swift death blow for many an OSS plugin where no one has the time to completely rewrite the project. Will a phoenix rise from the ashes? Certainly. It will be an opportunity for many to reinvent the wheel using toothpicks. However, in the meantime, many of us will be without wheels, especially in the Google Web Toolkit dev community.

Retrograde GWT Plugin Install

You’ll need to revert to the less secure Chrome 34 build. Generally speaking, this is a bad idea, so be careful: don’t do anything requiring security on it. Sadly, the Chrome team has left us little choice, while also saying you shouldn’t do the above 😉.

  1. Download and install the Chrome 34 Portable Installer (has no updater)
  2. Reenable drag and drop install of disabled .crx extensions
  3. Install GWT Chrome extension from the Chrome Web Store

Hopefully that works for you. Now you can continue developing GWT until Chrome adds more road blocks, but you should really consider moving to SuperDevMode. If you’re keen to help, please contribute to the SDBG Eclipse Dev Mode replacement project.