Below are the steps I followed to create keys and certificates for local development (at https://localhost:port#) of Tomcat- and Webpack DevServer-powered web applications. The process involves creating a local certificate authority (CA) with a self-signed certificate imported into Firefox and Chrome. Then I created a server key and certificate, the latter signed by the CA, to be used by both application servers. This was done on macOS with LibreSSL 2.6.5 for the key commands; the process will vary a bit with other operating systems or OpenSSL variants.
Before proceeding, note there are a couple of shortcuts for working with self-signed certificates for local development, if you have only a little bit of development to do and can tolerate the browser unpleasantries during that time. For Firefox, you can choose to ignore the "self-signed cert" warning, with the development pages continually marked as "not secure" as a consequence. Chrome also provides a couple of options (here and here) for the same. Finally, if your motivation for creating a new key is that you've lost the public key and/or cert for a given private key, see this note on how both can be regenerated from that private key.
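As a quick, hedged sketch of that last point (not taken from the linked note, and using placeholder file names): the public key can be re-extracted from the private key, and a fresh CSR for a replacement certificate can be generated from the same key:

openssl pkey -in myserver.key -pubout -out myserver.pub
openssl req -new -sha256 -key myserver.key -out myserver.csr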
Create a Certificate Authority whose certificate will be imported into Firefox and Chrome. Although this certificate will be self-signed, the certificate for the server key used by Tomcat and WDS will be signed by this CA. For these steps, I'm using genpkey to generate the private key, req to create the CSR, and x509 to sign it, with a lifespan of 825 days as that's apparently the maximum permitted on macOS.
(For the commands in this entry, using folders of ../certs and ../certs/ca)
openssl genpkey -algorithm RSA -out ca/MyCA.key -pkeyopt rsa_keygen_bits:2048 -aes-256-cbc
openssl req -new -sha256 -key ca/MyCA.key -out ca/MyCA.csr
openssl x509 -req -sha256 -days 825 -in ca/MyCA.csr -signkey ca/MyCA.key -out ca/MyCA.crt
Notes: the generated key, CSR, and certificate can be inspected at any time with the following commands:

openssl pkey -in MyCA.key -text -noout
openssl req -text -in MyCA.csr -noout
openssl x509 -text -in MyCA.crt -noout
Import the CA certificate into Firefox and Chrome.
For Firefox, menu item Firefox -> Preferences -> Privacy & Security -> View Certificates button -> Authorities -> Import MyCA.crt, then select "Trust this CA to identify websites." The CA will be listed on the Authorities tab under the Organization name you gave when creating the CSR.
Chrome uses Apple's Keychain Access to store certificates. It can be activated from menu Chrome -> Preferences -> Privacy & Security -> Security Tab -> Manage Certificates. However, I found it clumsy to work with and simpler to use the command line:
sudo security add-trusted-cert -k /Library/Keychains/System.keychain -d ca/MyCA.crt
Once run, you'll find the certificate under the System keychain, "Certificates" category, in Keychain Access.
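You can also verify it from the command line (a hedged example, assuming "MyCA" was the Common Name you gave the certificate):

security find-certificate -c "MyCA" /Library/Keychains/System.keychain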
Create the server key and certificate, specifying the domain name(s) that applications using the key will be served from. The first thing to note is that Chrome requires use of the subjectAltName extension when creating the certificate; Common Name alone will not work. There are several ways to configure this extension; the simplest I found that worked with my version of LibreSSL was to use an extension file, as explained in the OpenSSL Cookbook. (Note "TightBlog" refers to my open source project.)
Place in servercert.ext:
subjectAltName = DNS:localhost
Multiple domains can be specified, just make them comma-delimited.
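For example, a servercert.ext covering additional names might look like the following (the second hostname and the IP entry are hypothetical placeholders):

subjectAltName = DNS:localhost, DNS:myapp.local, IP:127.0.0.1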
Then run these commands:
openssl genpkey -algorithm RSA -out tightblog.key -pkeyopt rsa_keygen_bits:2048
openssl req -new -sha256 -key tightblog.key -out tightblog.csr
openssl x509 -req -in tightblog.csr -CA ca/MyCA.crt -CAkey ca/MyCA.key -CAcreateserial -out tightblog.crt -days 824 -sha256 -extfile servercert.ext
Configure the keys and/or certs on the development servers. For TightBlog development, the application runs on Tomcat; however, I use Webpack DevServer (WDS) while developing the Vue pages, so I have two servers to configure. SSL information for Tomcat is here and for WDS is here.
For Vue, I create a local-certs.js in the same directory as my vue.config.js which contains:
const fs = require("fs"); module.exports = { key: fs .readFileSync("/Users/gmazza/opensource/certs/tightblog.key") .toString(), cert: fs .readFileSync("/Users/gmazza/opensource/certs/tightblog.crt") .toString() };
For Tomcat, I found Jens Grassel's instructions to be useful. He has us create a PKCS #12 key-and-certificate-chain bundle followed by usage of Java keytool to import the bundle into the keystore configured in the Tomcat server.xml file:
openssl pkcs12 -export -in tightblog.crt -inkey tightblog.key -chain -CAfile MyCA.crt -name "MyTomcatCert" -out tightblogForTomcat.p12
keytool -importkeystore -deststorepass changeit -destkeystore /Users/gmazza/.keystore -srckeystore tightblogForTomcat.p12 -srcstoretype PKCS12
For Tomcat, you'll want no more than one alias (here, "MyTomcatCert") in the keystore, or else specify the keyAlias in the Tomcat server.xml. The keytool list and delete-alias commands, shown below, can help you explore and adjust the Tomcat keystore.
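For instance (a sketch assuming the keystore location and password used above):

keytool -list -keystore /Users/gmazza/.keystore -storepass changeit
keytool -delete -alias MyTomcatCert -keystore /Users/gmazza/.keystore -storepass changeit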
I activated the application in both browsers and checked the URL bar to confirm that the certificates were accepted. For my local development I have the application running on Tomcat at https://localhost:8443/ and the Vue pages running on WDS at https://localhost:8080. Examples showing the Vue URL in Firefox and the Tomcat one in Chrome are shown below. Both URLs were accepted by both browsers, but note Firefox does caution that the CA that signed the cert is not one of the standard CA certs it ships with.
Posted by Glen Mazza in Programming at 07:00AM May 23, 2021 | Comments[1]
TightBlog 3.7 released just now: (Release Page). This version requires a few database table changes over 3.6; if upgrading, be sure to review the database instructions given on the release page. It features updated comment and spam handling processes, as described on the TightBlog Wiki.
Posted by Glen Mazza in Programming at 12:00AM Dec 28, 2019 | Comments[0]
Tom Homberg provided a nice guide for implementing user-feedback validation within Spring applications, quite helpful for me in improving what I had in TightBlog. He creates a field/message Violation object (e.g., {"Name", "Name is required"}), a list of which is wrapped by a ValidationErrorResponse, the latter of which gets serialized to JSON and sent to the client to display validation errors. For my own implementation, I leave the field value blank for errors not specific to a particular field, and I use the structure both for 400-type responses for user-input problems and for generic 500-type messages for system errors.
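A minimal sketch of what these two support classes might look like, inferred from how they are used in the snippets below (the actual TightBlog classes may differ):

import java.util.ArrayList;
import java.util.List;

class Violation {
    private String field = "";
    private String message;

    Violation(String message) {
        this.message = message;
    }

    Violation(String field, String message) {
        this.field = field;
        this.message = message;
    }

    public String getField() { return field; }
    public String getMessage() { return message; }
}

class ValidationErrorResponse {
    private List<Violation> errors = new ArrayList<>();

    ValidationErrorResponse() {
    }

    ValidationErrorResponse(List<Violation> errors) {
        this.errors = errors;
    }

    public List<Violation> getErrors() { return errors; }
}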
Implementing this validation for TightBlog's blogger UI, I soon found it helpful to have convenience methods for quick creation of the Violations, ValidationErrorResponses and Spring ResponseEntities for providing feedback to the client:
public static ResponseEntity<ValidationErrorResponse> badRequest(String errorMessage) {
    return badRequest(new Violation(errorMessage));
}

public static ResponseEntity<ValidationErrorResponse> badRequest(Violation error) {
    return badRequest(Collections.singletonList(error));
}

public static ResponseEntity<ValidationErrorResponse> badRequest(List<Violation> errors) {
    return ResponseEntity.badRequest().body(new ValidationErrorResponse(errors));
}
i18n can be handled via the Locale method argument, one of the parameters automatically provided by Spring:
@Autowired
private MessageSource messages;

@PostMapping(...)
public ResponseEntity doFoo(Locale locale) {
    ...
    if (error) {
        return ValidationErrorResponse.badRequest(
                messages.getMessage("mediaFile.error.duplicateName", null, locale));
    }
}
On the front end, I have Angular.js trap the error code and then output the error messages (I am not presently using the field names). Below is truncated for brevity (full source: JavaScript and JSP):
this.commonErrorResponse = function(response) {
    self.errorObj = response.data;
}

<div id="errorMessageDiv" class="alert alert-danger" role="alert" ng-show="ctrl.errorObj.errors" ng-cloak>
    <button type="button" class="close" data-ng-click="ctrl.errorObj.errors = null" aria-label="Close">
        <span aria-hidden="true">×</span>
    </button>
    <ul class="list-unstyled">
        <li ng-repeat="item in ctrl.errorObj.errors">{{item.message}}</li>
    </ul>
</div>
Appearance:
Additionally, I was able to remove a fair amount of per-endpoint boilerplate by creating a single ExceptionHandler for unexpected 500 response code system errors and attaching it to my ControllerAdvice class so it would be used by all REST endpoints. For these types of exceptions usually a generic "System error occurred, please contact Administrator" message is sent to the user. However, I added a UUID that both appears on the client and goes into the logs along with the exception details, making it easy to search the logs for the specific problem. The exception handler (from the TightBlog source):
@ExceptionHandler(value = Exception.class)
// avoiding use of ResponseStatus as it activates Tomcat HTML page (see ResponseStatus JavaDoc)
public ResponseEntity<ValidationErrorResponse> handleException(Exception ex, Locale locale) {
    UUID errorUUID = UUID.randomUUID();

    log.error("Internal Server Error (ID: {}) processing REST call", errorUUID, ex);

    ValidationErrorResponse error = new ValidationErrorResponse();
    error.getErrors().add(new Violation(messages.getMessage(
            "generic.error.check.logs", new Object[] {errorUUID}, locale)));

    return ResponseEntity.status(500).body(error);
}
Screen output:
Log messaging containing the same UUID:
Additional Resources
Posted by Glen Mazza in Programming at 07:00AM Nov 06, 2019 | Comments[0]
Some things learned this past week with ElasticSearch:
Advanced Date Searches: An event search page my company provides for its Pro customers allows for filtering by start date and end date; however, some events do not have an end date defined. We decided to have differing business rules on what the start and end dates will filter, based on whether or not the event has an end date.
The above business logic had to be implemented in Java, but as an intermediate step I first worked out an ElasticSearch query for it using Kibana. Creating the query first helps immensely in the subsequent conversion to code. This is the query I came up with (using arbitrary sample dates to test it):
GET events-index/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "bool": {
            "must": [
              { "exists": { "field": "eventMeta.dateEnd" }},
              { "range": { "eventMeta.dateStart": { "lte": "2018-09-01" }}},
              { "range": { "eventMeta.dateEnd": { "gte": "2018-10-01" }}}
            ]
          }
        },
        {
          "bool": {
            "must_not": { "exists": { "field": "eventMeta.dateEnd" }},
            "must": [
              { "range": { "eventMeta.dateStart": { "gte": "2018-01-01", "lte": "2019-12-31" }}}
            ]
          }
        }
      ]
    }
  }
}
As can be seen above, I first used a nested bool query to separate the two main cases, namely events with and without an end date. The should at the top-level bool acts as an OR, indicating documents fitting either situation are desired. I then added the additional date requirements that need to hold for each specific case.
With the query now available, mapping it to Java code using ElasticSearch's QueryBuilders (API) was pleasantly straightforward; one can see the roughly 1-to-1 mapping of the code to the above query (the capitalized constants in the code refer to the relevant field names in the documents):
private QueryBuilder createEventDatesFilter(DateFilter filter) {
    BoolQueryBuilder mainQuery = QueryBuilders.boolQuery();

    // query modeled as a "should" (OR), divided by events with and without an end date,
    // with different filtering rules for each.
    BoolQueryBuilder hasEndDateBuilder = QueryBuilders.boolQuery();
    hasEndDateBuilder.must().add(QueryBuilders.existsQuery(EVENT_END_DATE));
    hasEndDateBuilder.must().add(fillDates(EVENT_START_DATE, null, filter.getStop()));
    hasEndDateBuilder.must().add(fillDates(EVENT_END_DATE, filter.getStart(), null));
    mainQuery.should().add(hasEndDateBuilder);

    BoolQueryBuilder noEndDateBuilder = QueryBuilders.boolQuery();
    noEndDateBuilder.mustNot().add(QueryBuilders.existsQuery(EVENT_END_DATE));
    noEndDateBuilder.must().add(fillDates(EVENT_START_DATE, filter.getStart(), filter.getStop()));
    mainQuery.should().add(noEndDateBuilder);

    return mainQuery;
}
Bulk Updates: We use a "sortDate" field to indicate the specific date front ends should use for sorting results (whether ascending or descending, and regardless of the actual source of the date used to populate that field). For our news stories we wanted to rely on the last update date for stories that have been updated since their original publication, and the published date otherwise. For certain older records that had been loaded, it turned out that the sortDate was still at the publishedDate when it should have been set to the updateDate. For research, I used the following query to determine the extent of the problem:
GET news-index/_search
{
  "query": {
    "bool": {
      "must": [
        { "exists": { "field": "meta.updateDate" }},
        { "script": { "script": "doc['meta.dates.sortDate'].value.getMillis() < doc['meta.updateDate'].value.getMillis()" }}
      ]
    }
  }
}
For the above query I used a two part Bool query, first checking for a non-null updateDate in the first clause and then a script clause to find sortDates preceding updateDates. (I found I needed to use .getMillis() for the inequality check to work.)
Next, I used ES' Update by Query API to do an all-at-once update of the records. The API has two parts, an optional query element to indicate the documents I wish to have updated (strictly speaking, in ES, to be replaced with a document with the requested changes) and a script element to indicate the modifications I want to have done to those documents. For my case:
POST news-index/_update_by_query
{
  "script": {
    "source": "ctx._source.meta.dates.sortDate = ctx._source.meta.updateDate",
    "lang": "painless"
  },
  "query": {
    "bool": {
      "must": [
        { "exists": { "field": "meta.updateDate" }},
        { "script": { "script": "doc['meta.dates.sortDate'].value.getMillis() < doc['meta.updateDate'].value.getMillis()" }}
      ]
    }
  }
}
For running your own updates, it is good to test first by making a do-nothing update in the script (e.g., set sortDate to sortDate) and specifying just one document to be updated, which can be done by adding a document-specific match requirement to the filter query (e.g., { "match": { "id": "...." } }). Kibana should report that just one document was "updated"; if so, switch to the desired update to confirm that single record is updated properly, and then finally remove the match filter to have all desired documents updated.
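For instance, a dry-run version of the above update, restricted to a single hypothetical document ID (a placeholder) and with the script made a no-op, might look like this:

POST news-index/_update_by_query
{
  "script": {
    "source": "ctx._source.meta.dates.sortDate = ctx._source.meta.dates.sortDate",
    "lang": "painless"
  },
  "query": {
    "bool": {
      "must": [
        { "match": { "id": "some-document-id" }},
        { "exists": { "field": "meta.updateDate" }},
        { "script": { "script": "doc['meta.dates.sortDate'].value.getMillis() < doc['meta.updateDate'].value.getMillis()" }}
      ]
    }
  }
}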
Posted by Glen Mazza in Programming at 07:00AM Oct 27, 2018 | Comments[0]
For converting from a Java collection, say List<Foo>, to any of several other collections (List<Bar1>, List<Bar2>, ...), rather than creating separate FooListToBar1List, FooListToBar2List, ... methods, a single generic FooListToBarList method and a series of Foo->Bar1, Foo->Bar2, ... converter functions can be used more succinctly. The example below converts a highly simplified List of SaleData objects to separate Lists of Customer and Product information, using a common generic saleDataListToItemList(saleDataList, converterFunction) method along with the passed-in converter functions saleDataToCustomerWithRegion and saleDataToProduct. Of particular note is how the converter functions are specified in the saleDataListToItemList calls. In the case of saleDataToCustomerWithRegion, which takes two arguments (the SaleData object and a region string), a lambda expression is used, while the Product converter can be specified as a simple method reference because it has only one parameter (the SaleData object).
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {

    public static void main(String[] args) {
        List<SaleData> saleDataList = new ArrayList<>();
        saleDataList.add(new SaleData("Bob", "radio"));
        saleDataList.add(new SaleData("Sam", "TV"));
        saleDataList.add(new SaleData("George", "laptop"));

        List<Customer> customerList = saleDataListToItemList(saleDataList,
                sd -> Main.saleDataToCustomerWithRegion(sd, "Texas"));
        System.out.println("Customers: ");
        customerList.forEach(System.out::println);

        List<Product> productList = saleDataListToItemList(saleDataList, Main::saleDataToProduct);
        System.out.println("Products: ");
        productList.forEach(System.out::println);
    }

    private static <T> List<T> saleDataListToItemList(List<SaleData> sdList, Function<SaleData, T> converter) {
        // handling potentially null sdList: https://stackoverflow.com/a/43381747/1207540
        return Optional.ofNullable(sdList)
                .map(List::stream)
                .orElse(Stream.empty())
                .map(converter)
                .collect(Collectors.toList());
    }

    private static Product saleDataToProduct(SaleData sd) {
        return new Product(sd.getProductName());
    }

    private static Customer saleDataToCustomerWithRegion(SaleData sd, String region) {
        return new Customer(sd.getCustomerName(), region);
    }

    private static class SaleData {
        private String customerName;
        private String productName;

        SaleData(String customerName, String productName) {
            this.customerName = customerName;
            this.productName = productName;
        }

        String getProductName() {
            return productName;
        }

        String getCustomerName() {
            return customerName;
        }
    }

    private static class Product {
        private String name;

        Product(String name) {
            this.name = name;
        }

        @Override
        public String toString() {
            return "Product{" + "name='" + name + '\'' + '}';
        }
    }

    private static class Customer {
        private String name;
        private String region;

        Customer(String name, String region) {
            this.name = name;
            this.region = region;
        }

        @Override
        public String toString() {
            return "Customer{" + "name='" + name + '\'' + ", region='" + region + '\'' + '}';
        }
    }
}
Output from running:
Customers: 
Customer{name='Bob', region='Texas'}
Customer{name='Sam', region='Texas'}
Customer{name='George', region='Texas'}
Products: 
Product{name='radio'}
Product{name='TV'}
Product{name='laptop'}
Posted by Glen Mazza in Programming at 07:00AM Oct 07, 2018 | Comments[0]
This tutorial shows how Datadog's API can be used to send custom metrics for a Spring Boot web application and how the results can be viewed graphically from Datadog dashboards. Samantha Drago's blog post provides a background on Datadog custom metrics, which require a paid Datadog account. Note that as an alternative not covered here, custom metrics can be defined via JMX with Datadog's JMX Integration used to collect them; that integration in particular provides a list of standard metrics that can be used even with the free DD account.
To facilitate metric accumulation and the transfer of metrics to Datadog, Spring Boot's ExportMetricReader and ExportMetricWriter implementations will be used. Every 5 seconds by default (adjustable via the spring.metrics.export.delay-millis property), all MetricReader implementations marked @ExportMetricReader will have their values read and written to @ExportMetricWriter-registered MetricWriters. The class ("exporter") that handles this within Spring Boot is the MetricCopyExporter, which treats metrics starting with "counter." as a counter (a metric that reports deltas on a continually growing statistic, like web hits) and anything else as a gauge (a standalone snapshot value at a certain timepoint, such as JVM heap usage). Note, however, that Datadog apparently does not support "counter"-type metric collection using its API (everything is treated as a gauge); I'll show at the end how a summation function can be used within Datadog to work around that.
Spring Boot already provides several web metrics that can be sent to Datadog without any explicit need to capture those metrics, in particular, the metrics listed here that start with "counter." or "gauge.". These provide commonly requested statistics such as number of calls to a website and average response time in milliseconds. The example below will report those statistics to Datadog along with application-specific "counter.foo" and "gauge.bar" metrics that are maintained by our application.
Create the web application. For our sample, Steps #1 and #2 of the Spring Boot to Kubernetes tutorial can be followed for this. Ensure you can see "Hello World!" at localhost:8080 before proceeding.
Modify the Spring Boot application to send metrics to Datadog. Note for tutorial brevity I'm condensing the number of classes that might otherwise be used to send metrics to DD. Additions/updates to make:
In the project build.gradle, the gson JSON library and Apache HTTP Client libraries need to be added to support the API calls to DD:
build.gradle:

dependencies {
    compile('com.google.code.gson:gson:2.8.2')
    compile('org.apache.httpcomponents:httpclient:4.5.3')
    ...other libraries...
}
The DemoMetricReaderWriter.java needs to be included; it serves as both the reader of our application-specific metrics (not those maintained by Spring Boot, which are handled by the BufferMetricReader included within the framework) and as the writer of all metrics (app-specific and Spring Boot) to Datadog. Please see the comments within the code for implementation details.
DemoMetricReaderWriter.java:

package com.gmazza.demo;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonPrimitive;
import com.google.gson.JsonSerializer;
import org.apache.http.HttpEntity;
import org.apache.http.StatusLine;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.boot.actuate.metrics.reader.MetricReader;
import org.springframework.boot.actuate.metrics.writer.Delta;
import org.springframework.boot.actuate.metrics.writer.MetricWriter;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import java.io.Closeable;
import java.io.IOException;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Component
public class DemoMetricReaderWriter implements MetricReader, MetricWriter, Closeable {

    private static final Logger logger = LoggerFactory.getLogger(DemoMetricReaderWriter.class);

    private Metric<Integer> accessCounter = null;

    private Map<String, Metric<?>> metricMap = new HashMap<>();

    private static final String DATADOG_SERIES_API_URL = "https://app.datadoghq.com/api/v1/series";

    @Value("${datadog.api.key}")
    private String apiKey = null;

    private CloseableHttpClient httpClient;

    private Gson gson;

    @PostConstruct
    public void init() {
        httpClient = HttpClients.createDefault();

        // removes use of scientific notation, see https://stackoverflow.com/a/18892735
        GsonBuilder gsonBuilder = new GsonBuilder();
        gsonBuilder.registerTypeAdapter(Double.class, (JsonSerializer<Double>) (src, typeOfSrc, context) -> {
            BigDecimal value = BigDecimal.valueOf(src);
            return new JsonPrimitive(value);
        });
        this.gson = gsonBuilder.create();
    }

    @Override
    public void close() throws IOException {
        httpClient.close();
    }

    // besides the app-specific metrics defined in the below method, Spring Boot also exports metrics
    // via its BufferMetricReader, for those with the "counter." or "gauge.*" prefix here:
    // https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html
    public void updateMetrics(long barGauge) {
        // Using same timestamp for both metrics, makes it easier to match/compare if desired in Datadog
        Date timestamp = new Date();

        logger.info("Updating foo-count and bar-gauge of {} for web call", barGauge);

        // Updates to values involve creating new Metrics as they are immutable

        // Because this Metric starts with a "counter.", MetricCopyExporter used by Spring Boot will treat this
        // as a counter and not a gauge when reading/writing values.
        accessCounter = new Metric<>("counter.foo",
                accessCounter == null ? 0 : accessCounter.getValue() + 1, timestamp);
        metricMap.put("counter.foo", accessCounter);

        // Does not start with "counter.", therefore a gauge to MetricCopyExporter.
        metricMap.put("gauge.bar", new Metric<>("gauge.bar", barGauge, timestamp));
    }

    // required by MetricReader
    @Override
    public Metric<?> findOne(String metricName) {
        logger.info("Calling findOne with name of {}", metricName);
        return metricMap.get(metricName);
    }

    // required by MetricReader
    @Override
    public Iterable<Metric<?>> findAll() {
        logger.info("Calling findAll(), size of {}", metricMap.size());
        return metricMap.values();
    }

    // required by MetricReader
    @Override
    public long count() {
        logger.info("Requesting metricMap size: {}", metricMap.size());
        return metricMap.size();
    }

    // required by CounterWriter (in MetricWriter), used only for counters
    @Override
    public void increment(Delta<?> delta) {
        logger.info("Counter being written: {}: {} at {}", delta.getName(), delta.getValue(), delta.getTimestamp());
        if (apiKey != null) {
            sendMetricToDatadog(delta, "counter");
        }
    }

    // required by CounterWriter (in MetricWriter), but implementation optional (MetricCopyExporter doesn't call)
    @Override
    public void reset(String metricName) {
        // not implemented
    }

    // required by GaugeWriter (in MetricWriter), used only for gauges
    @Override
    public void set(Metric<?> value) {
        logger.info("Gauge being written: {}: {} at {}", value.getName(), value.getValue(), value.getTimestamp());
        if (apiKey != null) {
            sendMetricToDatadog(value, "gauge");
        }
    }

    // API to send metrics to DD is defined here:
    // https://docs.datadoghq.com/api/?lang=python#post-time-series-points
    private void sendMetricToDatadog(Metric<?> metric, String metricType) {
        // let's add an app prefix to our values to distinguish from other apps in DD
        String dataDogMetricName = "app.glendemo." + metric.getName();

        logger.info("Datadog call for metric: {} value: {}", dataDogMetricName, metric.getValue());

        Map<String, Object> data = new HashMap<>();

        List<List<Object>> points = new ArrayList<>();
        List<Object> singleMetric = new ArrayList<>();
        singleMetric.add(metric.getTimestamp().getTime() / 1000);
        singleMetric.add(metric.getValue().longValue());
        points.add(singleMetric);
        // additional metrics could be added to points list providing params below are same for them

        data.put("metric", dataDogMetricName);
        data.put("type", metricType);
        data.put("points", points);
        // InetAddress.getLocalHost().getHostName() may be accurate for your "host" value.
        data.put("host", "localhost:8080");
        // optional, just adding to test
        data.put("tags", Arrays.asList("demotag1", "demotag2"));

        List<Map<String, Object>> series = new ArrayList<>();
        series.add(data);

        Map<String, Object> data2 = new HashMap<>();
        data2.put("series", series);

        try {
            String urlStr = DATADOG_SERIES_API_URL + "?api_key=" + apiKey;
            String json = gson.toJson(data2);
            byte[] jsonBytes = json.getBytes("UTF-8");

            HttpPost httpPost = new HttpPost(urlStr);
            httpPost.addHeader("Content-type", "application/json");
            httpPost.setEntity(new ByteArrayEntity(jsonBytes));

            try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
                StatusLine sl = response.getStatusLine();
                if (sl != null) {
                    // DD sends 202 (accepted) if it's happy
                    if (sl.getStatusCode() == 202) {
                        HttpEntity responseEntity = response.getEntity();
                        EntityUtils.consume(responseEntity);
                    } else {
                        logger.warn("Problem posting to Datadog: {} {}", sl.getStatusCode(), sl.getReasonPhrase());
                    }
                } else {
                    logger.warn("Problem posting to Datadog: response status line null");
                }
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }
}
The DemoApplication.java file needs updating to wire in the DemoMetricReaderWriter. Its "Hello World" endpoint is also updated to send a duration gauge value (similar to, but smaller than, the more complete gauge.response.root Spring Boot metric) to the DemoMetricReaderWriter.
package com.gmazza.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.actuate.autoconfigure.ExportMetricReader;
import org.springframework.boot.actuate.autoconfigure.ExportMetricWriter;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    private DemoMetricReaderWriter demoMetricReaderWriter = new DemoMetricReaderWriter();

    @Bean
    @ExportMetricReader
    @ExportMetricWriter
    DemoMetricReaderWriter getReader() {
        return demoMetricReaderWriter;
    }

    @RequestMapping("/")
    String home() throws Exception {
        long start = System.currentTimeMillis();
        // insert up to 2 second delay for a wider range of response times
        Thread.sleep((long) (Math.random() * 2000));
        // let that delay become the gauge.bar metric value
        long barValue = System.currentTimeMillis() - start;
        demoMetricReaderWriter.updateMetrics(barValue);
        return "Hello World!";
    }
}
The application.properties in your resources folder is where you provide your Datadog API key as well as some other settings. A few other spring.metrics.export.* settings are also available.
# Just logging will occur if api.key not defined
datadog.api.key=your_api_key_here
# Datadog can keep per-second metrics, but using every 15 seconds per Datadog's preference
spring.metrics.export.delay-millis=15000
# disabling security for this tutorial (don't do in prod), allows seeing all metrics at http://localhost:8080/metrics
management.security.enabled=false
Make several web calls to http://localhost:8080 from a browser to send metrics to Datadog. You may also wish to access the metrics at .../metrics a few times; you'll note the app-specific metrics counter.foo and gauge.bar become listed in the web page that is returned, and that accessing /metrics sends additional *.metrics stats (counter.status.200.metrics and gauge.response.metrics) to Datadog. We configured the application in application.properties to send Datadog metrics every 15 seconds; if running in your IDE, you can check the application logging in the Console window to see the metrics being sent.
Log into Datadog and view the metrics sent. Two main options from the left-side Datadog menu: Metrics -> Explorer and Dashboards -> New Dashboard. For the former, one can search on the metric names in the Graph: field (see upper illustration below), with charts of the data appearing immediately to the right. For the latter (lower illustration), I selected "New Timeboard" and added three Timeseries and one Query Value for the two main Spring Boot and two application-specific metrics sent.
Again, as the "counter" type is presently not supported via the Datadog API, for dashboards the cumulative sum function can be used to have the counter metrics grow over time in charts:
Posted by Glen Mazza in Programming at 07:00AM Feb 18, 2018 | Comments[0]
Provided here are simple instructions for deploying a "Hello World" Spring Boot application to Kubernetes, assuming usage of Amazon Elastic Container Service (ECS) including its Elastic Container Repository (ECR). Not covered are Kubernetes installation as well as proxy server configuration (i.e., accessibility of your application either externally or within an intranet) which would be specific to your environment.
Create the Spring Boot application via the Spring Initializr. I chose a Gradle app with the Web and Actuator dependencies (the latter to obtain a health check /health URL), as shown in the following illustration.
References: Getting Started with Spring Boot / Spring Initializr
Import the Spring Boot application generated by Initializr into your favorite Java IDE and modify the DemoApplication.java to expose a "Hello World" endpoint:
package com.gmazza.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.*;
import org.springframework.boot.autoconfigure.*;
import org.springframework.stereotype.*;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @RequestMapping("/")
    String home() {
        return "Hello World!";
    }
}
Let's make sure the application works standalone. From a command-line window in the Demo root folder, run gradle bootRun to activate the application. Ensure you can see "Hello World!" from a browser window at localhost:8080 and the health check at localhost:8080/health ({"status":"UP"}) before proceeding.
Create a Docker Image of the Spring Boot application. Steps:
Create a JAR of the demo application: gradle clean build from the Demo folder will generate a demo-0.0.1-SNAPSHOT.jar in the demo/build/libs folder.
Create a new folder separate from the demo application, with any name, say "projdeploy". Copy the demo JAR into this directory and also place in it a new file called "Dockerfile" with the following contents:
FROM openjdk:8u131-jdk-alpine
RUN echo "networkaddress.cache.ttl=60" >> /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/java.security
ADD demo-0.0.1-SNAPSHOT.jar demo.jar
ENTRYPOINT ["java", "-Xmx2000m", "-Dfile.encoding=UTF-8", "-jar", "demo.jar"]
The above Dockerfile builds off of the OpenJDK image, along with a recommended adjustment to the DNS caching TTL. The ADD command performs a rename of the JAR file, stripping off the version from the name for subsequent use in the ENTRYPOINT command.
Next, we'll generate the docker image. From the projdeploy folder, run docker build -t demo:0.0.1-SNAPSHOT . (note the trailing dot giving the build context). Then run the docker images command to view the created image in your local repository:
$ docker images
REPOSITORY    TAG               IMAGE ID        CREATED           SIZE
demo          0.0.1-SNAPSHOT    7139669729bf    10 minutes ago    116MB
Repeated docker build commands with the same repository and tag will just overwrite the previous image. Images can also be deleted using docker rmi -f demo:0.0.1-SNAPSHOT.
Push the target image to ECR. The ECR documentation provides more thorough instructions. Steps:
Install the AWS Command-Line Interface (AWS CLI). Step #1 of the AWS guide gives the OS-specific commands to use. In the aws ecr get-login... command you may find it necessary to specify the region where your ECR is hosted (e.g., --region us-west-1). Ensure you can log in from the command line (it will output "Login Succeeded") before continuing.
Create an additional tag for your image to facilitate pushing to ECR, as explained in Step #4 in the ECR w/CLI guide. For this example:
docker tag demo:0.0.1-SNAPSHOT your_aws_account_id.dkr.ecr.your_ecr_region.amazonaws.com/demo:0.0.1-SNAPSHOT
Note in the above command that the "demo" at the end refers to the name of the ECR repository where the image will ultimately be placed; if it does not already exist, it will need to be created beforehand for the next command to be successful, or another existing repository name used. Also, see here for determining your account ID. You may wish to run docker images again to confirm the image was tagged.
Push the newly tagged image to AWS ECR (replacing the "demo" below if you're using another ECR repository):
docker push your_aws_account_id.dkr.ecr.your_ecr_region.amazonaws.com/demo:0.0.1-SNAPSHOT
At this stage, it is good to confirm that the image was successfully loaded by viewing it in ECR repositories (the URL to do so should be https://console.aws.amazon.com/ecs/home?region=your_ecr_region#/repositories).
Deploy your new application to Kubernetes. Make sure you have kubectl installed locally for this process. Steps:
Create a deployment.yaml for the image. It is in this configuration file that you define your image's deployment, declare the image to use, and define its service and ingress objects. A sample deployment.yaml would be as follows:
deployment.yaml:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: aws_acct_id.dkr.ecr.region.amazonaws.com/demo:0.0.1-SNAPSHOT
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "500Mi"
          limits:
            memory: "1000Mi"
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 5
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 20
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 15
          timeoutSeconds: 10
          successThreshold: 1
          failureThreshold: 3
---
kind: Service
apiVersion: v1
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: demo.myorganization.org
    http:
      paths:
      - path:
        backend:
          serviceName: demo
          servicePort: 80
Take particular note of the deployment image (it must match what was pushed to ECR) and the Ingress host, i.e., the URL to be used to access the application.
Deploy the application onto Kubernetes. The basic kubectl create (deploy) command is as follows:
kubectl --context ??? --namespace ??? create -f deployment.yaml
To determine the correct context and namespace values to use, first enter kubectl config get-contexts to get a table of available contexts; the values will be in the second column, "Name". If your desired context is not the current one (first column), enter kubectl config use-context context-name to switch to it. Either way, then enter kubectl get namespaces for a listing of available namespaces under that context, picking one of those or creating a new namespace.
Once your application is created, it is good to go to the Kubernetes dashboard to confirm it has successfully deployed. In the "pod" section, click the next-to-last column (the one with the horizontal lines) for the deployed pod to see startup logging, including error messages, if any.
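The same startup logging can also be retrieved from the command line if you prefer (the pod name below is a placeholder; use the one reported by get pods):

kubectl --context ??? --namespace ??? get pods
kubectl --context ??? --namespace ??? logs demo-5d8f7c9b4-abcde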
Determine the IP address of the deployed application to configure routing. The kubectl --context ??? --namespace ??? get ingresses command (with context and namespace determined as before) will give you a list of configured ingresses and their IP addresses; configuration of the latter with Route 53 (at a minimum) will probably be needed for accessing your application.
Once the application URL is accessible, you should be able to retrieve the same "Hello World!" and health check responses you had obtained in the first step from running locally.
To undeploy the application (necessary before redeploying it via kubectl create), the deployment, service, and ingress can be individually deleted from the Kubernetes Dashboard. As an alternative, the following kubectl commands can be issued to delete them:
kubectl --context ??? --namespace ??? delete deployment demo
kubectl --context ??? --namespace ??? delete service demo
kubectl --context ??? --namespace ??? delete ingress demo
If it is desired to just reload the current application, deleting the application's pod will by default accomplish that, as the deployment will recreate the pod.
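A sketch of that approach (the pod name is a placeholder taken from the get pods output):

kubectl --context ??? --namespace ??? get pods
kubectl --context ??? --namespace ??? delete pod demo-5d8f7c9b4-abcde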
Posted by Glen Mazza in Programming at 06:10AM Feb 11, 2018 | Comments[0]
Steps I followed to deploy TightBlog on Linode:
Linode preparation:
Tomcat preparation:
The sudo systemctl [start|stop|restart] tomcat8 command-line commands are available for starting and stopping Tomcat. After starting Tomcat, confirm you can access Tomcat's port 8080 from a browser using your linode's domain name or IP address.

export CATALINA_HOME=/usr/share/tomcat8
export CATALINA_BASE=/var/lib/tomcat8
sudo systemctl stop tomcat8
For housekeeping on key updates, may wish to delete logs at /var/log/tomcat8
cd /opt/letsencrypt
sudo -H ./letsencrypt-auto certonly --standalone -d glenmazza.net -d www.glenmazza.net
(see "Congratulations!" feedback indicating Let's Encrypt worked. Any problem running? Try this)
cd /etc/letsencrypt/live/glenmazza.net*
sudo openssl pkcs12 -export -in cert.pem -inkey privkey.pem -out cert_and_key.p12 -name tomcat -CAfile chain.pem -caname root
-- The above command will prompt you for a password for the temporary cert_and_key.p12 file.
-- Choose what you wish but remember for the next command ("abc" in the command below.)
-- The next command has placeholders for the Java key and keystore password (both necessary). Choose what you wish but as I understand
-- Tomcat expects the two to be the same (can see previous password via sudo more /var/lib/tomcat8/conf/server.xml)
sudo keytool -importkeystore -destkeystore MyDSKeyStore.jks -srckeystore cert_and_key.p12 -srcstorepass abc -srcstoretype PKCS12 -alias tomcat -deststorepass <changeit> -destkeypass <changeit>
sudo cp MyDSKeyStore.jks /var/lib/tomcat8
sudo systemctl start tomcat8
...confirm website accessible again at https://..., if not working ensure tomcat dirs all owned by tomcat user & restart
cd /etc/letsencrypt/live
sudo rm -r glenmazza.net*
The Java keystore password you chose above will need to be placed in the tomcat/conf/server.xml file as shown in the next step.
Note: Ivan Tichy has a blog post on how to automate requesting new certificates from LE every three months and updating Tomcat's keystore with them.
<Connector port="80" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="443" /> <Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="MyTomcatKeystore.jks" keystorePass="?????"/>
The keystore file referenced above would need to be placed in Tomcat's root directory, if you use another location be sure to update the keystoreFile value to include the path to the file.
To allow Tomcat to bind to the privileged ports 80 and 443 as a non-root user, use authbind: set AUTHBIND=yes in the /etc/default/tomcat8 file to activate it and then run a script similar to the following (replace "tomcat8" with the non-root user that is running Tomcat on your linode):
sudo touch /etc/authbind/byport/80
sudo chmod 500 /etc/authbind/byport/80
sudo chown tomcat8 /etc/authbind/byport/80
sudo touch /etc/authbind/byport/443
sudo chmod 500 /etc/authbind/byport/443
sudo chown tomcat8 /etc/authbind/byport/443
An alternative option is to have Tomcat continue to use its default (and non-privileged) 8080 and 8443 ports in its server.xml but use iptable rerouting to redirect those ports to 80 and 443. If you go this route, no authbind configuration is necessary.
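A hedged sketch of such iptables rules (how you persist them across reboots will depend on your distribution):

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443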
See /usr/share/doc/tomcat8-common/README.Debian for more information, including running with a Java security manager if desired.

MySQL preparation:
TightBlog deployment:
Rename the WAR file to ROOT.war if you wish the application to be accessible at https://yourdomain.com/ instead of https://yourdomain.com/tightblog. The WAR file will need to be placed in the Tomcat webapps folder as usual.

You will also need a tightblog-custom.properties file. Create or download these as appropriate.

Files can be uploaded to the linode with a command such as scp ROOT.war myaccount@glenmazza.net:~/tbfiles. However, I prefer "sftp glenmazza.net", navigating to desired folders, and using "put" or "get" to upload or download respectively.

Once deployed, the blog should be accessible at https://yourdomain.com[/tightblog].
Troubleshooting: if accessing https://yourdomain.com[/tightblog] from a browser returns 404's while you can still ping the domain, check to see if you can access that URL from a terminal window that is SSH'ed into your Linode using the command-line Lynx browser. If you can, that would mean Tomcat is running properly but there is most likely a problem with the authbind or iptable rerouting preventing external access. If you can't, Tomcat configuration should be looked at first.
Export to a file:
mysqldump -u root -p tightblogdb > db_backup_YYYYMMDD.sql

Import into the database to restore it:
mysql -u root tightblogdb < db_backup_YYYYMMDD.sql
Best to save the backup copy outside of the linode (e.g., on your local machine) and create a regular backup routine.
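As one possible sketch of such a routine, a weekly cron entry on the linode could create dated dumps (this assumes MySQL credentials are supplied via a ~/.my.cnf file so no password prompt is needed, and the backup path is a placeholder; note the % signs must be escaped in crontab):

0 3 * * 0 mysqldump tightblogdb > /home/myaccount/backups/db_backup_$(date +\%Y\%m\%d).sql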
Posted by Glen Mazza in Programming at 07:00AM Aug 20, 2017 | Comments[2]