Glen Mazza's Weblog


https://glenmazza.net/blog/date/20181007 Sunday October 07, 2018

Using functions with a single generic method to convert lists

For converting a Java collection, say List<Foo>, to any of several other collections (List<Bar1>, List<Bar2>, ...), rather than creating separate FooListToBar1List, FooListToBar2List, ... methods, a single generic FooListToBarList method combined with a series of Foo->Bar1, Foo->Bar2, ... converter functions can be used more succinctly. The example below converts a highly simplified List of SaleData objects to separate Lists of Customer and Product information, using a common generic saleDataListToItemList(saleDataList, converterFunction) method along with passed-in converter functions saleDataToCustomer and saleDataToProduct. Of particular note is how the converter functions are specified in the saleDataListToItemList calls. In the case of saleDataToCustomer, which takes two arguments (the SaleData object and a region string), a lambda expression is used, while the Product converter can be specified as a simple method reference because it has only one parameter (the SaleData object).

import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {

    public static void main(String[] args) {

        List<SaleData> saleDataList = new ArrayList<>();
        saleDataList.add(new SaleData("Bob", "radio"));
        saleDataList.add(new SaleData("Sam", "TV"));
        saleDataList.add(new SaleData("George", "laptop"));

        List<Customer> customerList = saleDataListToItemList(saleDataList, sd -> Main.saleDataToCustomerWithRegion(sd, "Texas"));
        System.out.println("Customers: ");
        customerList.forEach(System.out::println);

        List<Product> productList = saleDataListToItemList(saleDataList, Main::saleDataToProduct);
        System.out.println("Products: ");
        productList.forEach(System.out::println);
    }

    private static <T> List<T> saleDataListToItemList(List<SaleData> sdList, Function<SaleData, T> converter) {
        // handling potentially null sdList:  https://stackoverflow.com/a/43381747/1207540
        return Optional.ofNullable(sdList).map(List::stream).orElse(Stream.empty()).map(converter).collect(Collectors.toList());
    }

    private static Product saleDataToProduct(SaleData sd) {
        return new Product(sd.getProductName());
    }

    private static Customer saleDataToCustomerWithRegion(SaleData sd, String region) {
        return new Customer(sd.getCustomerName(), region);
    }

    private static class SaleData {
        private String customerName;
        private String productName;

        SaleData(String customerName, String productName) {
            this.customerName = customerName;
            this.productName = productName;
        }

        String getProductName() {
            return productName;
        }

        String getCustomerName() {
            return customerName;
        }

    }

    private static class Product {
        private String name;

        Product(String name) {
            this.name = name;
        }

        @Override
        public String toString() {
            return "Product{" +
                    "name='" + name + '\'' +
                    '}';
        }
    }

    private static class Customer {
        private String name;
        private String region;

        Customer(String name, String region) {
            this.name = name;
            this.region = region;
        }

        @Override
        public String toString() {
            return "Customer{" +
                    "name='" + name + '\'' +
                    ", region='" + region + '\'' +
                    '}';
        }
    }

}

Output from running:

Customers: 
Customer{name='Bob', region='Texas'}
Customer{name='Sam', region='Texas'}
Customer{name='George', region='Texas'}
Products: 
Product{name='radio'}
Product{name='TV'}
Product{name='laptop'}
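
As a variation not in the example above, the same pattern extends to converters needing an extra argument: the generic method can accept a BiFunction plus the extra value, letting the two-argument converter also be passed as a method reference rather than wrapped in a lambda. A minimal sketch, assuming it is added to the Main class above (with java.util.function.BiFunction imported):

private static <T, U> List<T> saleDataListToItemList(List<SaleData> sdList, BiFunction<SaleData, U, T> converter, U extraArg) {
    // same null-safe stream as in the method above, applying the extra argument to each element
    return Optional.ofNullable(sdList).map(List::stream).orElse(Stream.empty())
            .map(sd -> converter.apply(sd, extraArg))
            .collect(Collectors.toList());
}

// usage in main(), now via a method reference rather than a lambda:
// List<Customer> customerList =
//         saleDataListToItemList(saleDataList, Main::saleDataToCustomerWithRegion, "Texas");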

https://glenmazza.net/blog/date/20180624 Sunday June 24, 2018

TightBlog 3.0 Released!

My third annual release currently powering this blog. See here for a listing of enhancements over the previous TightBlog 2.0, here for all the enhancements over the original Apache Roller 5.1.0 I had forked in 2015. Screenshots are here.

https://glenmazza.net/blog/date/20180218 Sunday February 18, 2018

Sending Custom Metrics from Spring Boot to Datadog

This tutorial shows how Datadog's API can be used to send custom metrics from a Spring Boot web application and how the results can be viewed graphically from Datadog dashboards. Samantha Drago's blog post provides a background on Datadog custom metrics, which require a paid Datadog account. Note that as an alternative not covered here, custom metrics can be defined via JMX and collected with Datadog's JMX Integration; that integration in particular provides a list of standard metrics that can be used even with the free Datadog account.

To facilitate metric accumulation and the transfer of metrics to Datadog, Spring Boot's ExportMetricReader and ExportMetricWriter implementations will be used. Every 5 seconds by default (adjustable via the spring.metrics.export.delay-millis property), all MetricReader implementations marked @ExportMetricReader will have their values read and written to @ExportMetricWriter-registered MetricWriters. The class ("exporter") that handles this within Spring Boot is the MetricCopyExporter, which treats metrics whose names start with "counter." as counters (a metric that reports deltas on a continually growing statistic, like web hits) and anything else as a gauge (a standalone snapshot value at a certain point in time, such as JVM heap usage). Note, however, that Datadog apparently does not support "counter"-type metric collection using its API (everything is treated as a gauge); I'll show at the end how a summation function can be used within Datadog to work around that.

Spring Boot already provides several web metrics that can be sent to Datadog without any explicit need to capture those metrics, in particular, the metrics listed here that start with "counter." or "gauge.". These provide commonly requested statistics such as number of calls to a website and average response time in milliseconds. The example below will report those statistics to Datadog along with application-specific "counter.foo" and "gauge.bar" metrics that are maintained by our application.

  1. Create the web application. For our sample, Steps #1 and #2 of the Spring Boot to Kubernetes tutorial can be followed for this. Ensure you can see "Hello World!" at localhost:8080 before proceeding.

  2. Modify the Spring Boot application to send metrics to Datadog. Note for tutorial brevity I'm condensing the number of classes that might otherwise be used to send metrics to DD. Additions/updates to make:

    • In the project build.gradle, the gson JSON library and Apache HTTP Client libraries need to be added to support the API calls to DD:

      build.gradle:
      dependencies {
      	compile('com.google.code.gson:gson:2.8.2')
      	compile('org.apache.httpcomponents:httpclient:4.5.3')
      	...other libraries...
      }
      
    • The DemoMetricReaderWriter.java class needs to be included; it serves as both the reader of our application-specific metrics (not those maintained by Spring Boot--those are handled by the BufferMetricReader included within the framework) and as the writer of all metrics (app-specific and Spring Boot) to Datadog. Please see the comments within the code for implementation details.

      DemoMetricReaderWriter.java:
      package com.gmazza.demo;
      
      import com.google.gson.Gson;
      import com.google.gson.GsonBuilder;
      import com.google.gson.JsonPrimitive;
      import com.google.gson.JsonSerializer;
      import org.apache.http.HttpEntity;
      import org.apache.http.StatusLine;
      import org.apache.http.client.methods.CloseableHttpResponse;
      import org.apache.http.client.methods.HttpPost;
      import org.apache.http.entity.ByteArrayEntity;
      import org.apache.http.impl.client.CloseableHttpClient;
      import org.apache.http.impl.client.HttpClients;
      import org.apache.http.util.EntityUtils;
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;
      import org.springframework.beans.factory.annotation.Value;
      import org.springframework.boot.actuate.metrics.Metric;
      import org.springframework.boot.actuate.metrics.reader.MetricReader;
      import org.springframework.boot.actuate.metrics.writer.Delta;
      import org.springframework.boot.actuate.metrics.writer.MetricWriter;
      import org.springframework.stereotype.Component;
      
      import javax.annotation.PostConstruct;
      import java.io.Closeable;
      import java.io.IOException;
      import java.math.BigDecimal;
      import java.util.ArrayList;
      import java.util.Arrays;
      import java.util.Date;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;
      
      @Component
      public class DemoMetricReaderWriter implements MetricReader, MetricWriter, Closeable {
      
          private static final Logger logger = LoggerFactory.getLogger(DemoMetricReaderWriter.class);
      
          private Metric<Integer> accessCounter = null;
      
          private Map<String, Metric<?>> metricMap = new HashMap<>();
      
          private static final String DATADOG_SERIES_API_URL = "https://app.datadoghq.com/api/v1/series";
      
          @Value("${datadog.api.key}")
          private String apiKey = null;
      
          private CloseableHttpClient httpClient;
      
          private Gson gson;
      
          @PostConstruct
          public void init() {
              httpClient = HttpClients.createDefault();
      
              // removes use of scientific notation, see https://stackoverflow.com/a/18892735
              GsonBuilder gsonBuilder = new GsonBuilder();
              gsonBuilder.registerTypeAdapter(Double.class, (JsonSerializer<Double>) (src, typeOfSrc, context) -> {
                  BigDecimal value = BigDecimal.valueOf(src);
                  return new JsonPrimitive(value);
              });
      
              this.gson = gsonBuilder.create();
          }
      
          @Override
          public void close() throws IOException {
              httpClient.close();
          }
      
          // besides the app-specific metrics defined in the below method, Spring Boot also exports metrics
          // via its BufferMetricReader, for those with the "counter." or "gauge.*" prefix here:
          // https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html
          public void updateMetrics(long barGauge) {
              // Using same timestamp for both metrics, makes it easier to match/compare if desired in Datadog
              Date timestamp = new Date();
      
              logger.info("Updating foo-count and bar-gauge of {} for web call", barGauge);
      
              // Updates to values involve creating new Metrics as they are immutable
      
              // Because this Metric starts with a "counter.", MetricCopyExporter used by Spring Boot will treat this
              // as a counter and not a gauge when reading/writing values.
              accessCounter = new Metric<>("counter.foo",
                      accessCounter == null ? 0 : accessCounter.getValue() + 1, timestamp);
              metricMap.put("counter.foo", accessCounter);
      
              // Does not start with "counter.", therefore a gauge to MetricCopyExporter.
              metricMap.put("gauge.bar", new Metric<>("gauge.bar", barGauge, timestamp));
          }
      
          // required by MetricReader
          @Override
          public Metric<?> findOne(String metricName) {
              logger.info("Calling findOne with name of {}", metricName);
              return metricMap.get(metricName);
          }
      
          // required by MetricReader
          @Override
          public Iterable<Metric<?>> findAll() {
              logger.info("Calling findAll(), size of {}", metricMap.size());
              return metricMap.values();
          }
      
          // required by MetricReader
          @Override
          public long count() {
              logger.info("Requesting metricMap size: {}", metricMap.size());
              return metricMap.size();
          }
      
          // required by CounterWriter (in MetricWriter), used only for counters
          @Override
          public void increment(Delta<?> delta) {
              logger.info("Counter being written: {}: {} at {}", delta.getName(), delta.getValue(), delta.getTimestamp());
              if (apiKey != null) {
                  sendMetricToDatadog(delta, "counter");
              }
          }
      
          // required by CounterWriter (in MetricWriter), but implementation optional (MetricCopyExporter doesn't call)
          @Override
          public void reset(String metricName) {
              // not implemented
          }
      
          // required by GaugeWriter (in MetricWriter), used only for gauges
          @Override
          public void set(Metric<?> value) {
              logger.info("Gauge being written: {}: {} at {}", value.getName(), value.getValue(), value.getTimestamp());
              if (apiKey != null) {
                  sendMetricToDatadog(value, "gauge");
              }
          }
      
          // API to send metrics to DD is defined here:
          // https://docs.datadoghq.com/api/?lang=python#post-time-series-points
          private void sendMetricToDatadog(Metric<?> metric, String metricType) {
              // let's add an app prefix to our values to distinguish from other apps in DD
              String dataDogMetricName = "app.glendemo." + metric.getName();
      
              logger.info("Datadog call for metric: {} value: {}", dataDogMetricName, metric.getValue());
      
              Map<String, Object> data = new HashMap<>();
      
              List<List<Object>> points = new ArrayList<>();
              List<Object> singleMetric = new ArrayList<>();
              singleMetric.add(metric.getTimestamp().getTime() / 1000);
              singleMetric.add(metric.getValue().longValue());
              points.add(singleMetric);
              // additional metrics could be added to points list providing params below are same for them
      
              data.put("metric", dataDogMetricName);
              data.put("type", metricType);
              data.put("points", points);
              // InetAddress.getLocalHost().getHostName() may be accurate for your "host" value.
              data.put("host", "localhost:8080");
      
              // optional, just adding to test
              data.put("tags", Arrays.asList("demotag1", "demotag2"));
      
              List<Map<String, Object>> series = new ArrayList<>();
              series.add(data);
      
              Map<String, Object> data2 = new HashMap<>();
              data2.put("series", series);
      
              try {
                  String urlStr = DATADOG_SERIES_API_URL + "?api_key=" + apiKey;
                  String json = gson.toJson(data2);
                  byte[] jsonBytes = json.getBytes("UTF-8");
      
                  HttpPost httpPost = new HttpPost(urlStr);
                  httpPost.addHeader("Content-type", "application/json");
                  httpPost.setEntity(new ByteArrayEntity(jsonBytes));
      
                  try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
                      StatusLine sl = response.getStatusLine();
                      if (sl != null) {
                          // DD sends 202 (accepted) if it's happy
                          if (sl.getStatusCode() == 202) {
                              HttpEntity responseEntity = response.getEntity();
                              EntityUtils.consume(responseEntity);
                          } else {
                              logger.warn("Problem posting to Datadog: {} {}", sl.getStatusCode(), sl.getReasonPhrase());
                          }
                      } else {
                          logger.warn("Problem posting to Datadog: response status line null");
                      }
                  }
      
              } catch (Exception e) {
                  logger.error(e.getMessage(), e);
              }
          }
      }
      
    • The DemoApplication.java file needs updating to wire in the DemoMetricReaderWriter. Its "Hello World" endpoint is also updated to send a duration gauge value (similar to, but smaller than, the more complete gauge.response.root Spring Boot metric) to the DemoMetricReaderWriter.

      DemoApplication.java:
      package com.gmazza.demo;
      
      import org.springframework.boot.SpringApplication;
      import org.springframework.boot.actuate.autoconfigure.ExportMetricReader;
      import org.springframework.boot.actuate.autoconfigure.ExportMetricWriter;
      import org.springframework.boot.autoconfigure.SpringBootApplication;
      import org.springframework.context.annotation.Bean;
      import org.springframework.web.bind.annotation.RequestMapping;
      import org.springframework.web.bind.annotation.RestController;
      
      @SpringBootApplication
      @RestController
      public class DemoApplication {
      
          public static void main(String[] args) {
              SpringApplication.run(DemoApplication.class, args);
          }
      
          private DemoMetricReaderWriter demoMetricReaderWriter = new DemoMetricReaderWriter();
      
          @Bean
          @ExportMetricReader
          @ExportMetricWriter
          DemoMetricReaderWriter getReader() {
              return demoMetricReaderWriter;
          }
      
          @RequestMapping("/")
          String home() throws Exception {
              long start = System.currentTimeMillis();
      
              // insert up to 2 second delay for a wider range of response times
              Thread.sleep((long) (Math.random() * 2000));
      
              // let that delay become the gauge.bar metric value
              long barValue = System.currentTimeMillis() - start;
      
              demoMetricReaderWriter.updateMetrics(barValue);
              return "Hello World!";
          }
      }
      
    • The application.properties in your resources folder is where you provide your Datadog API key as well as some other settings. A few other spring.metrics.export.* settings are also available.

      application.properties:
      # Just logging will occur if api.key not defined
      datadog.api.key=your_api_key_here
      # Datadog can keep per-second metrics, but using every 15 seconds per Datadog's preference
      spring.metrics.export.delay-millis=15000
      # disabling security for this tutorial (don't do in prod), allows seeing all metrics at http://localhost:8080/metrics
      management.security.enabled=false
      
  3. Make several web calls to http://localhost:8080 from a browser to send metrics to Datadog (a small Java class for scripting these calls is sketched at the end of this post). You may also wish to access the metrics at .../metrics a few times; you'll note the app-specific metrics counter.foo and gauge.bar become listed in the web page that is returned, and also that accessing /metrics sends additional *.metrics (counter.status.200.metrics and gauge.response.metrics) stats to Datadog. We configured the application in application.properties to send Datadog metrics every 15 seconds; if running in your IDE, you can check the application logging in the Console window to see the metrics being sent.

  4. Log into Datadog and view the metrics sent. Two main options from the left-side Datadog menu: Metrics -> Explorer and Dashboards -> New Dashboard. For the former, one can search on the metric names in the Graph: field (see upper illustration below), with charts of the data appearing immediately to the right. For the latter (lower illustration), I selected "New Timeboard" and added three Timeseries and one Query Value for the two main Spring Boot and two application-specific metrics sent.


    Metrics Explorer

    Datadog TimeBoard

    Again, as the "counter" type is presently not supported via the Datadog API, for dashboards the cumulative sum function can be used to have the counter metrics grow over time in charts:

    Cumulative Sum function
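
    As an aside for step 3 above, if you'd rather script the web calls than refresh a browser, a small standalone class like the following can generate the traffic. This is purely illustrative and not part of the tutorial (the class name is made up; curl or a browser works just as well):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class TrafficGenerator {
        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 20; i++) {
                // call the demo app's root endpoint, which updates counter.foo and gauge.bar
                HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:8080/").openConnection();
                int status = conn.getResponseCode();
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) {
                        // drain the "Hello World!" response
                    }
                }
                System.out.println("Call " + (i + 1) + " returned HTTP " + status);
                conn.disconnect();
            }
        }
    }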

https://glenmazza.net/blog/date/20180212 Monday February 12, 2018

TightBlog 2.0.4 Patch Release

I made a 2.0.4 Patch Release of TightBlog to fix two pressing issues: the blog hit counter was not resetting properly at the end of each day, and the "Insert Media File" popup on the blog entry edit page was not working. Upgrading is as simple as swapping out the 2.0.3 WAR for this one. For first-time installs, see the general installation instructions; Linode-specific instructions are here.

Work on the future TightBlog 3.0 is continuing; it has much simpler blog template extraction, a better caching design, and uses Thymeleaf instead of Velocity as the blog page template language. Non-test Java source files have fallen to 126 from the 146 in TightBlog 2.0, and one fewer database table is needed (now down to 12).

https://glenmazza.net/blog/date/20180211 Sunday February 11, 2018

Hosting Spring Boot Applications on Kubernetes

Provided here are simple instructions for deploying a "Hello World" Spring Boot application to Kubernetes, assuming usage of Amazon Elastic Container Service (ECS) including its Elastic Container Repository (ECR). Not covered are Kubernetes installation as well as proxy server configuration (i.e., accessibility of your application either externally or within an intranet) which would be specific to your environment.

  1. Create the Spring Boot application via the Spring Initializr. I chose a Gradle app with the Web and Actuator dependencies (the latter to obtain a health check /health URL), as shown in the following illustration.


    References: Getting Started with Spring Boot / Spring Initializr

  2. Import the Spring Boot application generated by Initializr into your favorite Java IDE and modify the DemoApplication.java to expose a "Hello World" endpoint:

    package com.gmazza.demo;
    
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;
    
    @SpringBootApplication
    @RestController
    public class DemoApplication {
    
    	public static void main(String[] args) {
    		SpringApplication.run(DemoApplication.class, args);
    	}
    
    	@RequestMapping("/")
    	String home() {
    		return "Hello World!";
    	}
    }
    

    Let's make sure the application works standalone. From a command-line window in the Demo root folder, run gradle bootRun to activate the application. Ensure you can see "Hello World!" from a browser window at localhost:8080 and the health check ({"status":"UP"}) at localhost:8080/health before proceeding.

  3. Create a Docker Image of the Spring Boot application. Steps:

    1. Create a JAR of the demo application: gradle clean build from the Demo folder will generate a demo-0.0.1-SNAPSHOT.jar in the demo/build/libs folder.

    2. Create a new folder separate from the demo application, with any name, say "projdeploy". Copy the demo JAR into this directory and also create a new file within it called "Dockerfile" with the following contents:

      FROM openjdk:8u131-jdk-alpine
      RUN echo "networkaddress.cache.ttl=60" >> /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/java.security
      ADD demo-0.0.1-SNAPSHOT.jar demo.jar
      ENTRYPOINT ["java","-Xmx2000m", "-Dfile.encoding=UTF-8","-jar","demo.jar" ]
      

      The above Dockerfile builds an image off of the OpenJDK base image, along with a recommended adjustment to the DNS caching TTL. The ADD command renames the JAR file, stripping the version from the name for subsequent use in the ENTRYPOINT command.

    3. Next, we'll generate the Docker image. From the projdeploy folder, run docker build -t demo:0.0.1-SNAPSHOT . (note the trailing period, which specifies the current directory as the build context). Run the docker images command to view the created image in your local repository:

      $ docker images
      REPOSITORY                                                 TAG                                 IMAGE ID            CREATED             SIZE
      demo                                                       0.0.1-SNAPSHOT                      7139669729bf        10 minutes ago      116MB
      

      Repeated docker build commands with the same repository and tag will just overwrite the previous image. Images can also be deleted using docker rmi -f demo:0.0.1-SNAPSHOT.

  4. Push the target image to ECR. The ECR documentation provides more thorough instructions. Steps:

    1. Install the AWS Command-Line Interface (AWS CLI). Step #1 of the AWS guide gives the OS-specific commands to use. In the aws ecr get-login... command you may find it necessary to specify the region where your ECR is hosted (e.g., --region us-west-1). Ensure you can log in from the command line (it will output "Login Succeeded") before continuing.

    2. Create an additional tag for your image to facilitate pushing to ECR, as explained in Step #4 in the ECR w/CLI guide. For this example:

      docker tag demo:0.0.1-SNAPSHOT your_aws_account_id.dkr.ecr.your_ecr_region.amazonaws.com/demo:0.0.1-SNAPSHOT
      

      Note that in the above command, the "demo" at the end refers to the name of the ECR repository where the image will ultimately be placed; if it does not already exist, it will need to be created beforehand for the next command to succeed (or another existing repository name used). Also, see here for determining your account ID. You may wish to run docker images again to confirm the image was tagged.

    3. Push the newly tagged image to AWS ECR (replacing the "demo" below if you're using another ECR repository):

      docker push your_aws_account_id.dkr.ecr.your_ecr_region.amazonaws.com/demo:0.0.1-SNAPSHOT
      
    4. At this stage, it's good to confirm that the image was successfully loaded by viewing it in the ECR repositories (the URL to do so should be https://console.aws.amazon.com/ecs/home?region=your_ecr_region#/repositories.)

  5. Deploy your new application to Kubernetes. Make sure you have kubectl installed locally for this process. Steps:

    1. Create a deployment.yaml for the image. It is in this configuration file that you declare your image's deployment (including the image to use) and its service and ingress objects. A sample deployment.yaml would be as follows:

      deployment.yaml:

      kind: Deployment
      apiVersion: extensions/v1beta1
      metadata:
        name: demo
      spec:
        replicas: 1
        template:
          metadata:
            labels:
              app: demo
          spec:
            containers:
            - name: demo
              image: aws_acct_id.dkr.ecr.region.amazonaws.com/demo:0.0.1-SNAPSHOT 
              ports:
              - containerPort: 80
              resources:
                requests:
                  memory: "500Mi"
                limits:
                  memory: "1000Mi"
              readinessProbe:
                httpGet:
                  scheme: HTTP
                  path: /health
                  port: 8080
                initialDelaySeconds: 15
                periodSeconds: 5
                timeoutSeconds: 5
                successThreshold: 1
                failureThreshold: 20
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /health
                  port: 8080
                initialDelaySeconds: 15
                periodSeconds: 15
                timeoutSeconds: 10
                successThreshold: 1
                failureThreshold: 3
      ---
      kind: Service
      apiVersion: v1
      metadata:
        name: demo
      spec:
        selector:
          app: demo
        ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
      ---
      kind: Ingress
      apiVersion: extensions/v1beta1
      metadata:
        name: demo
        annotations:
          kubernetes.io/ingress.class: "nginx"
      spec:
        rules:
        - host: demo.myorganization.org
          http:
            paths:
            - path:
              backend:
                serviceName: demo
                servicePort: 80
      

      Take particular note of the deployment image (it must match what was pushed to ECR) and the Ingress load balancer host, i.e., the URL to be used to access the application.

    2. Deploy the application onto Kubernetes. The basic kubectl create (deploy) command is as follows:

      kubectl --context ??? --namespace ??? create -f deployment.yaml
      

      To determine the correct context and namespace values to use, first enter kubectl config get-contexts to get a table of available contexts; the values will be in the second column, "Name". If your desired context is not the current one (first column), enter kubectl config use-context context-name to switch to it. Either way, then enter kubectl get namespaces for a listing of available namespaces under that context, picking one of those or creating a new namespace.

      Once your application is created, it's good to go to the Kubernetes dashboard to confirm it has successfully deployed. In the "pod" section, click the next-to-last column (the one with the horizontal lines) for the deployed pod to see startup logging, including error messages, if any.

    3. Determine the IP address of the deployed application to configure routing. The kubectl --context ??? --namespace ??? get ingresses command (with context and namespace determined as before) will give you a list of configured ingresses and their IP addresses; configuring the latter in Route 53 (at a minimum) will probably be needed for accessing your application.

      Once the application URL is accessible, you should be able to retrieve the same "Hello World!" and health check responses you had obtained in the first step from running locally.

    4. To undeploy the application (necessary before redeploying it via kubectl create), the application, service, and ingress can be individually deleted from the Kubernetes Dashboard. As an alternative, the following kubectl commands can be issued to delete the application's deployment, service, and ingress:

      kubectl --context ??? --namespace ??? delete deployment demo
      kubectl --context ??? --namespace ??? delete service demo
      kubectl --context ??? --namespace ??? delete ingress demo
      

      If you just wish to reload the current application, deleting the application's pod will by default accomplish that, as the deployment will recreate the pod.

https://glenmazza.net/blog/date/20171126 Sunday November 26, 2017

Streaming Salesforce notifications to Kafka topics

Salesforce CRM's Streaming API allows for receiving real-time notifications of changes to records stored in Salesforce. To enable this functionality, the Salesforce developer creates a PushTopic channel backed by a SOQL query that defines the changes the developer wishes to be notified of. Record modifications (Create, Update, Delete, etc.) matching the SOQL query are sent on the channel and can be picked up by external systems. Salesforce provides instructions on how its Workbench tool can be used to create, view, and test PushTopic notifications, which is a useful first step. For Java clients, Salesforce also provides a tutorial using an EMPConnector sample project. At least the username-password version of that sample worked following the instructions given in the tutorial, but the tutorial was vague on how to get the bearer-token version to work.

For Kafka, Confluent's Jeremy Custenborder has written a Salesforce source connector for placing notifications from a Salesforce PushTopic onto a Kafka topic. His simplified instructions in the GitHub README assume usage of Confluent's distribution of Kafka, including the Confluent-only Schema Registry with Avro-formatted messages. I'm expanding on his instructions a bit to make them more end-to-end and also to show how the connector can be used with pure Kafka, no schema registry, and JSON-formatted messages:

  1. Follow the API Streaming Quick Start Using Workbench to configure your SF PushTopic. Before proceeding, make sure records created from the Workbench generate notifications on the SF PushTopic. It's a quick, efficient tutorial.

  2. Create a Connected Application from your force.com account. Connected Apps allow for external access to your Salesforce data. I mostly relied on Calvin Froedge's article for configuring the Connected App.

  3. (Optional) To confirm the Connected App is working properly before moving on to Kafka, you may wish to run the EMPConnector sample mentioned above.

  4. If you haven't already, download a Kafka distribution and expand it; its folder will be referred to as KAFKA_HOME below.

  5. Clone and build the Salesforce source connector in a separate directory.

  6. Open a terminal window with five tabs. Unless stated otherwise, all commands should be run from the KAFKA_HOME directory.

    1. First and second tabs, activate ZooKeeper and the Kafka broker using the commands listed in Step #2 of the Kafka Quick Start.

    2. Third tab, create a Kafka topic to receive the notifications placed on the Salesforce PushTopic:

      bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sf_invoice_statement__c
      

      The name of the topic can be different from the one given above; just be sure to update the connector configuration file given in the next step accordingly.

    3. Fourth tab, start the Salesforce Connector. First, navigate to the config folder under the base folder of the connector and make a MySourceConnector.properties file:

      name=connector1
      tasks.max=1
      connector.class=com.github.jcustenborder.kafka.connect.salesforce.SalesforceSourceConnector
      
      # Set these required values
      salesforce.username=your.force.com.username@xxx.com
      salesforce.password=your.force.com.password
      salesforce.password.token=xxxx
      salesforce.consumer.key=xxxx
      salesforce.consumer.secret=xxxx
      salesforce.push.topic.name=InvoiceStatementUpdates
      salesforce.push.topic.create=false
      kafka.topic=sf_${_ObjectType}
      

      Notes:

      • The password token can be obtained via Force.com's Reset My Security Token screen.
      • The consumer key and secret are available from the Connected App configuration within Force.com.
      • Having already created the Salesforce PushTopic, I set salesforce.push.topic.create to false in the configuration above. Alternatively, I could have set it to true and provided the salesforce.object property to have the Salesforce Connector dynamically create the PushTopic. However, the auto-created PushTopic did not (at least for me) do a good job of bringing all the fields of the object ("description" was missing from InvoiceStatement notifications); manually creating the PushTopic will (in most cases) provide the fields given in the SELECT list of the SOQL query you create for the PushTopic.

      Next, from the Connector base folder, create the CLASSPATH and activate the connector as follows:

      export CLASSPATH="$(find target/ -type f -name '*.jar'| tr '\n' ':')"
      $KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/MySourceConnector.properties 
      

      Important: Note the export statement above is different from the one in the GitHub instructions; I created a not-yet-applied PR to fix the latter.

    4. For the fifth tab, we need to create a consumer to read the messages that the Salesforce connector places on our Kafka topic; this worked for me (a programmatic Java consumer is sketched after this list as an alternative):

      bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sf_invoice_statement__c --from-beginning
      
  7. Use the Workbench Insert page to create a new Invoice Statement record, and view the Kafka consumer output to confirm the Kafka topic received the notification:

    Once created, you should see a message for it output by the Kafka consumer:

    {"schema":
       {"type":"struct","fields":[
          {"type":"string","optional":false,"doc":"Unique identifier for the object.","field":"Id"},
          {"type":"string","optional":true,"field":"OwnerId"},
          {"type":"boolean","optional":true,"field":"IsDeleted"},
          {"type":"string","optional":true,"field":"Name"},
          {"type":"int64","optional":true,"name":"org.apache.kafka.connect.data.Timestamp","version":1,"field":"CreatedDate"},
          {"type":"string","optional":true,"field":"CreatedById"},
          {"type":"int64","optional":true,"name":"org.apache.kafka.connect.data.Timestamp","version":1,"field":"LastModifiedDate"},
          {"type":"string","optional":true,"field":"LastModifiedById"},
          {"type":"int64","optional":true,"name":"org.apache.kafka.connect.data.Timestamp","version":1,"field":"SystemModstamp"},
          {"type":"string","optional":true,"field":"Status__c"},
          {"type":"string","optional":true,"field":"Description__c"},
          {"type":"string","optional":true,"field":"_ObjectType"},
          {"type":"string","optional":true,"field":"_EventType"}],
       "optional":false,"name":"com.github.jcustenborder.kafka.connect.salesforce.Invoice_Statement__c"},
       "payload":{"Id":"a001I000004PNEZQA4","OwnerId":null,"IsDeleted":null,"Name":"INV0014","CreatedDate":null,
          "CreatedById":null,"LastModifiedDate":null,"LastModifiedById":null,"SystemModstamp":null, "Status__c":"Negotiating",
          "Description__c":"Hello 11/27","_ObjectType":"Invoice_Statement__c","_EventType":"created"}}
    

https://glenmazza.net/blog/date/20171124 Friday November 24, 2017

TightBlog 3.0 Status

I'm expecting TightBlog 3.0 to be released sometime in the middle of next year. TightBlog 3.0 switches the blog template processing engine from Velocity to Thymeleaf, and I'm presently in the process of updating the blog themes to use the latter (very happy with the switch, by the way). My other main goal for 3.0 is to have solid unit test coverage for the product--having dropped from Roller's 493 Java source files to around 140 so far, that's a much more feasible proposition today. I've put in comprehensive unit tests for several of the classes so far, and in the process of refactoring the code to facilitate that, the code became much better, as expected. Besides the refactoring, I could spot several cobwebs to clear out that weren't apparent to me during earlier passes through the code. I'm looking forward to writing more unit tests so the other source files will see similar improvement.

The TightBlog issues page lists several other things I'd like to get in for 3.0.

https://glenmazza.net/blog/date/20170902 Saturday September 02, 2017

TightBlog 2.0.3 Released

Some minor fixes and other updates made in 2.0.3, please see the release notes on GitHub for more info.

https://glenmazza.net/blog/date/20170820 Sunday August 20, 2017

Deploying TightBlog on Linode

Steps I followed to deploy TightBlog on Linode:

Linode preparation:

  1. Sign up for Linode. You may wish to check for any Linode promotion codes for starting credit. I used the $10/month plan providing 2 GB of RAM which seems plenty for my Tomcat & MySQL setup.
  2. For the starting image for my new Linode, I used Ubuntu 16.04 LTS, partitioning 30208 MB for it and 512 MB for a separate swap image.
  3. I followed the remainder of the Getting Started Guide and most of the recommendations of the subsequent Securing Your Server guide.
  4. If you don't want the default Linode domain name of lixxxx-yy.members.linode.com (and you probably don't), go to a domain name registrar such as Namecheap or Google Domains to register your desired domain. I used the latter to obtain glenmazza.net and then configured it for my linode following these instructions.
  5. As recommended in the Securing Your Server guide, I used key-based authentication allowing me to easily connect to my linode using "ssh (or sftp) glenmazza.net" from a command-line on my home computer. (If you haven't gotten a custom domain name yet, you'll find the default name and IP address in the Public IPs section on the Remote Access tab in the Linode Manager.)
  6. I created a ~/tbfiles folder (owned by a non-root normal user account) as a staging area for files I'm uploading to my linode as well as to hold the TightBlog media file directories and (if desired) Lucene blog entry search indexes.

Tomcat preparation:

  1. I installed Java and then Tomcat on my image. It is recommended not to install Tomcat under the root user but to use a non-root account instead. The Debian install package results in sudo systemctl [start|stop|restart] tomcat8 commands being available for starting and stopping Tomcat. After starting Tomcat, confirm you can access Tomcat's port 8080 from a browser using your linode's domain name or IP address.
  2. In my ~/.bashrc file, I added the following constants:
    export CATALINA_HOME=/usr/share/tomcat8
    export CATALINA_BASE=/var/lib/tomcat8
    
  3. Create a signed SSL certificate for use with Tomcat. I used Let's Encrypt, which generates 3-month certificates, and these instructions for placing the key Let's Encrypt generates into a Java keystore that can be read by Tomcat. My steps on the Linode every three months:
    sudo systemctl stop tomcat8
    For housekeeping on key updates, may wish to delete logs at /var/lib/tomcat8/logs 
    cd /opt/letsencrypt
    sudo -H ./letsencrypt-auto certonly --standalone -d glenmazza.net -d www.glenmazza.net
    (see "Congratulations!" feedback indicating Let's Encrypt worked.)
    cd /etc/letsencrypt/live/glenmazza.net
    sudo openssl pkcs12 -export -in cert.pem -inkey privkey.pem -out cert_and_key.p12 -name tomcat -CAfile chain.pem -caname root
    -- The above command will prompt you for a password for the temporary cert_and_key.p12 file.
    -- Choose what you wish but remember for the next command ("abc" in the command below.) 
    -- The next command has placeholders for the Java key and keystore password (both necessary).  Choose what you wish but as I understand
    -- Tomcat expects the two to be the same (can see previous password via sudo more /var/lib/tomcat8/conf/server.xml) 
    sudo keytool -importkeystore -destkeystore MyDSKeyStore.jks -srckeystore cert_and_key.p12 -srcstorepass abc -srcstoretype PKCS12 -alias tomcat -deststorepass <changeit> -destkeypass <changeit>
    sudo cp MyDSKeyStore.jks /var/lib/tomcat8
    sudo systemctl start tomcat8
    ...confirm website accessible again at https://...
    cd /etc/letsencrypt/live
    sudo rm -r glenmazza.net
    

    The Java keystore password you chose above will need to be placed in the tomcat/conf/server.xml file as shown in the next step.

    Note: Ivan Tichy has a blog post on how to automate requesting new certificates from LE every three months and updating Tomcat's keystore with them.

  4. Update the Tomcat conf/server.xml file to have HTTP running on port 80 and HTTPS on 443:
        <Connector port="80" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   URIEncoding="UTF-8"
                   redirectPort="443" />
    
        <Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS" 
        keystoreFile="MyTomcatKeystore.jks" keystorePass="?????"/>
    

    The keystore file referenced above would need to be placed in Tomcat's root directory, if you use another location be sure to update the keystoreFile value to include the path to the file.

  5. If you're using a non-root user to run Tomcat as suggested, you'll need to make some changes to allow that user to use privileged ports (ports under 1024; port 80 and port 443 in Tomcat's case). This can be done with either firewall rules or authbind; I chose the latter. For authbind, first edit the /etc/default/tomcat8 file to activate it and then run a script similar to the following (replace "tomcat8" with the non-root user that is running Tomcat on your linode):
    sudo touch /etc/authbind/byport/80
    sudo chmod 500 /etc/authbind/byport/80
    sudo chown tomcat8 /etc/authbind/byport/80
    sudo touch /etc/authbind/byport/443
    sudo chmod 500 /etc/authbind/byport/443
    sudo chown tomcat8 /etc/authbind/byport/443
    

    An alternative option is to have Tomcat continue to use its default (and non-privileged) 8080 and 8443 ports in its server.xml but use iptable rerouting to redirect those ports to 80 and 443. If you go this route, no authbind configuration is necessary.

  6. Check the README at /usr/share/doc/tomcat8-common/README.Debian for more information, including running with a Java security manager if desired.

MySQL preparation:

  1. Install MySQL on your linode.
  2. As explained in the TightBlog wiki, create the database that will hold the TightBlog data. Be sure to save someplace the MySQL administrator (root) username and password as well as the MySQL user account who will have ownership of the TightBlog database.
  3. (Optional) To connect to the MySQL database on your linode using SquirrelSQL or another SQL client running on your local computer, these instructions will help.

TightBlog deployment:

  1. I built the TightBlog war following these instructions and renamed it to ROOT.war so it will be Tomcat's default application (i.e., have a shorter URL, https://yourdomain.com/ instead of https://yourdomain.com/tightblog). The WAR file will need to be placed in the Tomcat webapps folder as usual.
  2. There are three files that need to be uploaded to the Tomcat /lib folder, as explained on the Deploy to Tomcat page on the wiki: slf4j-api-1.7.25.jar, your JDBC driver (mysql-connector-java-X.X.X-bin.jar for MySQL), and the tightblog-custom.properties file. Create or download these as appropriate.
  3. For uploading files from your computer to your linode, see the scp or sftp commands, for example: scp ROOT.war myaccount@glenmazza.net:~/tbfiles. However, I prefer "sftp glenmazza.net", navigating to desired folders, and using "put" or "get" to upload or download respectively.
  4. After uploading the files and placing them in their proper locations, restart Tomcat and start the TightBlog application install process at https://yourdomain.com[/tightblog].

    Troubleshooting: if accessing https://yourdomain.com[/tightblog] from a browser returns 404's while you can still ping the domain, check to see if you can access that URL from a terminal window that is SSH'ed into your Linode using the command-line Lynx browser. If you can, that would mean Tomcat is running properly but there is most likely a problem with the authbind or iptable rerouting preventing external access. If you can't, Tomcat configuration should be looked at first.

  5. Best to create a test blog entry, then set up a database backup and restore process and confirm it is working with your database instance (e.g., add a blog entry after the backup, restore the backup, and confirm the new entry disappears; or delete a blog entry after a backup and confirm the restore returns it.) Simple commands for MySQL would be as follows (see here for more details on available commands):
    Export to a file:
    mysqldump -u root -p tightblogdb > db_backup_YYYYMMDD.sql
    Import into the database to restore it:
    mysql -u root tightblogdb < db_backup_YYYYMMDD.sql
    

    Best to save the backup copy outside of the linode (e.g., on your local machine) and create a regular backup routine.

  6. Soon after the blog is up, good to check if the emailing is working by sending yourself a comment for a blog entry. If no email is received, check the tightblog.log in the Tomcat logs folder for any mail sending exceptions. If you're using GMail and there is an authorization problem, the error logs may provide you a precise link at accounts.google.com where you can authorize TightBlog to use the email account.

https://glenmazza.net/blog/date/20170226 Sunday February 26, 2017

Using AppleScript to quickly configure your work environment

At work, I use Mac OS' Script Editor to create and compile AppleScript scripts to quickly configure my desktop depending on the programming task at hand. Each compiled script, or application, I place in the desktop folder so it appears on my desktop and can be activated with a simple double-click.

Three tasks I commonly script, adjusting them as needed depending on the work at hand:

  • Activating a terminal window with tabs pre-opened to various directories and running various commands. A script that opens one terminal window with three tabs in the specified directories, and optionally runs commands in those directories, would look as follows (see here for more info):
    tell application "Terminal"
    	activate
    	do script
    	do script "cd /Users/gmazza/mydir1" in tab 1 of front window
    	my makeTab()
    	do script "cd /Users/gmazza/mydir2" in tab 2 of front window
    	my makeTab()
    	do script "cd /Users/gmazza/mydir3" in tab 3 of front window
    end tell
    
    on makeTab()
    	tell application "System Events" to keystroke "t" using {command down}
    	delay 0.2
    end makeTab
    
  • Running IntelliJ IDEA. Simple:
    activate application "IntelliJ IDEA"
    
  • Opening Chrome with a desired number of tabs to certain webpages:
    tell application "Google Chrome"
    	open location "http://www.websiteone.com/onpage"
    	open location "http://www.websitetwo.com/anotherpage"
    	open location "http://www.websitethree.com"
    end tell
    

Script Editor has a "run" button allowing me to test the scripts as I develop them. Once done, I save the script both standalone (so I can edit it later if desired) and exported as an application. Exporting allows a simple double-click to directly run the task, rather than bringing up the Script Editor and running the script via the "run" button.