This entry provides a simple example of using Spring Shell (within Spring Boot) and Micrometer to send custom metrics to Datadog. For a fuller example with Docker and the Datadog Agent, I recommend Datadog Learning Center's free Datadog 101: Developer course. This course will also provide you with a free two-week training Datadog account which you can use to receive custom metrics for this example, useful if you don't care to test against your company account. Note that custom metrics ordinarily carry billing costs and require a paid Datadog account.
Spring Boot already provides numerous web metrics in several areas that can be sent to Datadog without any explicit need to capture them. The jvm.* metrics, for example, are readable in Datadog's Metrics Explorer by filtering on statistic:value in the "from" field for this example. For custom metrics, we'll have the Spring Shell app commands modify a Timer and a Gauge.
Create the Spring Shell application. From Spring Initializr, choose a Java JAR app with Spring Shell and Datadog as dependencies. For some reason I needed to choose the Spring Boot 2.7.x series for an app to download. Prior to running the demo in your IDE (I use IntelliJ), the management.metrics.export.datadog.apiKey=... value needs to be added to the main/resources/application.properties file. Your API key can be found by logging into Datadog and, from the bottom of the left-side menu, clicking on your name, then Organization Settings, then Access, then API Keys.
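As a sketch, the relevant portion of application.properties might look like the below; the key value is a placeholder, and the step interval is an optional assumption (shortening how often Micrometer publishes to Datadog), not something the demo requires:

management.metrics.export.datadog.apiKey=abc123yourapikey
# optional: how frequently Micrometer publishes metrics to Datadog (Spring Boot 2.7 property name)
management.metrics.export.datadog.step=30s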
Create the shell commands to add to the timer and gauge values:
package com.example.demo;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.Timer;
import org.springframework.shell.standard.ShellComponent;
import org.springframework.shell.standard.ShellMethod;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

@ShellComponent
public class MyCommands {

    private final Timer timer;
    private final List<Integer> integerList;

    public MyCommands(MeterRegistry registry) {
        timer = registry.timer("demoapp.timer", Tags.empty());
        integerList = registry.gauge("demoapp.listsize", Tags.empty(), new ArrayList<>(), List::size);
    }

    @ShellMethod("Note a timer event of given duration in seconds")
    public String timer(int seconds) {
        timer.record(seconds, TimeUnit.SECONDS);
        return "Timer event noted";
    }

    @ShellMethod("Add an element to list")
    public String listAdd() {
        integerList.add(10);
        return "List has " + integerList.size() + " elements";
    }

    @ShellMethod("Remove an element from list (if possible)")
    public String listRemove() {
        if (integerList.size() > 0) {
            integerList.remove(integerList.size() - 1);
        }
        return "List has " + integerList.size() + " elements";
    }
}
For the above we're keeping a gauge on the size of the list, and for the timer, we provide the number of seconds that an arbitrary event took.
Run the application and enter several timer #secs, list-add, and list-remove commands. Run -> Run Application from the IntelliJ menu should work fine for entering the commands in the IDE's Console view. To keep the connection with Datadog, keep the command-line app running even if you're not entering commands. After 2 or 3 minutes, check Datadog's Metrics Explorer to confirm that the demoapp.timer and demoapp.listsize metrics are being received:
A dashboard can be created to show both properties at once (with separate average time and counts given for the Timer):
Posted by Glen Mazza in Programming at 07:00AM May 20, 2023 | Tags: datadog | Comments[0]
For a Spring Boot application accessing a Flyway-managed MySQL database, I updated its integration tests from using in-memory HSQLDB to Testcontainers' MySQL module. This was both to have testing done on a database more closely matching deployment environments and to have various tables pre-populated with standard data provided by our Flyway migration scripts. I'm happy to report that the cost of these benefits was only a slight increase in test running time (perhaps 10-15% more), and that despite there being 170 Flyway migrations at the time of the conversion.
It is best to use Testcontainers when starting to develop the application, so any hiccups found in a Flyway migration file can be fixed before that migration file becomes final. Switching to Testcontainers after the fact uncovered problems with some of our migration scripts, requiring additional configuration of the MySQL testcontainer. The main problems were unnecessary explicit specification of the schema name in the scripts, and a suboptimal definition of a table that required explicit_defaults_for_timestamp to be configured in MySQL. This configuration was in our deployment my.cnf files but not in the default one used by the MySQL testcontainer.
Solving the first issue involved explicit specification of the username, password, and database name when creating the Testcontainers instance. Initializing the MySQL container just once for all integration tests is sufficient for our particular application, so Testcontainers' manual container lifecycle control, which uses Java configuration, was used:
@SpringBootTest
@ActiveProfiles("itest")
public class MyAppIntegrationTest {

    private static final MySQLContainer MY_SQL_CONTAINER;

    static {
        MY_SQL_CONTAINER = new MySQLContainer("mysql:5.7")
                .withUsername("myappUser")
                .withPassword("myappPass")
                .withDatabaseName("myapp");
        MY_SQL_CONTAINER.start();
    }

    @DynamicPropertySource
    public static void containersProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.username", MY_SQL_CONTAINER::getUsername);
        registry.add("spring.datasource.password", MY_SQL_CONTAINER::getPassword);
        registry.add("spring.datasource.url", MY_SQL_CONTAINER::getJdbcUrl);
    }
}
The containersProperties call dynamically populates the given Spring Boot config values with those provided by the MySQL testcontainer. Note the spring.datasource.url will be a "regular" MySQL URL, i.e., Spring Boot and Flyway are unaware that the database is a Testcontainers-provided one.
When doing Java-based instantiation of the MySQL instance as above, I've found adding the standard property-file configuration of the MySQL testcontainer to be unnecessary and best avoided -- doing so appeared to create a second, unused container instance, slowing down builds. However, if you choose to use properties-only configuration (perhaps wishing to instantiate and initialize MySQL testcontainers more frequently during the integration tests), configuration similar to the below should work. Note the "myappdatabase", "myuser" and "mypass" values given in the url property are used to tell Testcontainers the values to set when creating the database and the user. In turn, whatever values are placed here should go into the standard Spring username and password properties as shown:
spring.flyway.enabled=true
spring.datasource.url=jdbc:tc:mysql:5.7://localhost/myappdatabase?user=myuser&password=mypass
spring.datasource.driver-class-name=org.testcontainers.jdbc.ContainerDatabaseDriver
spring.datasource.username=myuser
spring.datasource.password=mypass
Fixing the second issue involved using a different my.cnf for the MySQL testcontainer. To accomplish that I copied the mysql-default-conf/my.cnf directory and file from the org.testcontainers:mysql:1.17.6 library (easily viewable from IntelliJ IDEA) and pasted it as src/itest/resources/mysql-default-conf/my.cnf in the Spring Boot application. In the copied file I then added the needed change.
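The change itself amounts to one extra server setting. A sketch of the addition to the copied my.cnf (assuming the [mysqld] section header is already present in the file shipped with the library, in which case only the second line is added):

[mysqld]
explicit_defaults_for_timestamp = 1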
Notes:
The Gradle dependencies used to activate the MySQL testcontainer in the integration test environment:
configurations {
    integrationTestImplementation.extendsFrom testImplementation
    integrationTestRuntimeOnly.extendsFrom testRuntimeOnly
}

dependencies {
    integrationTestImplementation 'org.springframework.boot:spring-boot-starter-test'
    integrationTestImplementation 'org.testcontainers:mysql'
    integrationTestImplementation 'org.testcontainers:junit-jupiter'
}
Posted by Glen Mazza in Programming at 07:00AM May 14, 2023 | Comments[0]
Basit-Mahmood Ahmed has provided a nice example of adding a custom grant to Spring Authorization Server, offering a replacement for the "resource owner" grant removed from the OAuth 2.1 standard. I was able to leverage that to provide our own resource owner implementation. At work we've needed to create several types of custom grants; thankfully, what started off as perplexing to implement became rather routine with repetition. The best starting advice I can give is to examine the out-of-the-box grant types and follow along with them. The Reference Guide, of course, and YouTube videos are also valuable, for example Getting Started with Spring Authorization Server and Configuring and Extending Spring Authorization Server.
For each custom grant type to support under Spring Auth Server, I've normally found five extra source files are needed, as well as adjustments to a couple of others. Most classes have limited responsibilities, helping keep their creation straightforward. I'm providing links to Basit-Mahmood's example where applicable, as well as some additional, unrelated code samples:
The custom grant Token class (example): Extending OAuth2AuthorizationGrantAuthenticationToken, this class holds the properties used by the Provider (discussed below) to authenticate the client. For the resource owner grant, it would have username and password. For a grant based on incoming IP Address, it would be an IP address string. This class is also a good place to define the custom token grant_type parameter used in the OAuth2 token request.
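As a rough sketch of such a token class, assuming a hypothetical grant based on incoming IP address (all class, constant, and parameter names here are made up for illustration, not from the linked example):

import java.util.Map;

import org.springframework.security.core.Authentication;
import org.springframework.security.oauth2.core.AuthorizationGrantType;
import org.springframework.security.oauth2.server.authorization.authentication.OAuth2AuthorizationGrantAuthenticationToken;

public class IpAddressGrantAuthenticationToken extends OAuth2AuthorizationGrantAuthenticationToken {

    // custom grant_type value sent in the OAuth2 token request (made-up URN)
    public static final AuthorizationGrantType IP_ADDRESS_GRANT =
            new AuthorizationGrantType("urn:myco:params:oauth:grant-type:ip_address");

    private final String ipAddress;

    public IpAddressGrantAuthenticationToken(String ipAddress, Authentication clientPrincipal,
                                             Map<String, Object> additionalParameters) {
        super(IP_ADDRESS_GRANT, clientPrincipal, additionalParameters);
        this.ipAddress = ipAddress;
    }

    public String getIpAddress() {
        return ipAddress;
    }
}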
The custom grant Converter class (example): This class takes the incoming HttpServletRequest and, by reading its properties, creates an instance of the Token class. Parameter validation for obvious shortcomings (missing param values, etc.) is good to do here, to help keep the Provider uncluttered.
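A companion converter sketch for the hypothetical IP-address grant above (the "ip_address" request parameter name is an assumption for illustration):

import java.util.Collections;

import javax.servlet.http.HttpServletRequest;

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.oauth2.core.OAuth2AuthenticationException;
import org.springframework.security.oauth2.core.OAuth2ErrorCodes;
import org.springframework.security.oauth2.core.endpoint.OAuth2ParameterNames;
import org.springframework.security.web.authentication.AuthenticationConverter;
import org.springframework.util.StringUtils;

public class IpAddressGrantAuthenticationConverter implements AuthenticationConverter {

    @Override
    public Authentication convert(HttpServletRequest request) {
        // only handle our custom grant_type; returning null lets the other converters run
        String grantType = request.getParameter(OAuth2ParameterNames.GRANT_TYPE);
        if (!IpAddressGrantAuthenticationToken.IP_ADDRESS_GRANT.getValue().equals(grantType)) {
            return null;
        }

        // the already-authenticated client (from the client_id/client_secret basic auth)
        Authentication clientPrincipal = SecurityContextHolder.getContext().getAuthentication();

        // basic validation of incoming parameters, keeping the Provider uncluttered
        String ipAddress = request.getParameter("ip_address");
        if (!StringUtils.hasText(ipAddress)) {
            throw new OAuth2AuthenticationException(OAuth2ErrorCodes.INVALID_REQUEST);
        }

        return new IpAddressGrantAuthenticationToken(ipAddress, clientPrincipal, Collections.emptyMap());
    }
}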
The Provider class (example): This class takes the Token created by the Converter and authenticates and authorizes the grant. In general, there are two parts to this: authentication of the token, frequently handled by the two additional classes discussed below, followed by construction of the JWT, done partly in the Provider and partly in the OAuth2TokenCustomizer also discussed below.
A token to represent the resource owner. This class will extend from AbstractAuthenticationToken, and will be used both to authenticate a user and to represent the user after authentication.
package ...;

import java.util.Collection;

import org.springframework.security.authentication.AbstractAuthenticationToken;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.util.Assert;

public class MyInnerAuthenticationToken extends AbstractAuthenticationToken {

    private final MyAccessUser myAccessUser;

    // constructor for user-to-validate
    public MyInnerAuthenticationToken(String propertyToCheck) {
        super(null);
        this.myAccessUser = new MyAccessUser(propertyToCheck);
    }

    // constructor for validated user
    public MyInnerAuthenticationToken(MyAccessUser myAccessUser, Collection<? extends GrantedAuthority> authorities) {
        super(authorities);
        this.myAccessUser = myAccessUser;
        super.setAuthenticated(true); // must use super, as we override
    }

    @Override
    public Object getPrincipal() {
        return this.myAccessUser;
    }

    @Override
    public void setAuthenticated(boolean isAuthenticated) throws IllegalArgumentException {
        Assert.isTrue(!isAuthenticated,
                "Cannot set this token to trusted - use constructor which takes a GrantedAuthority list instead");
        super.setAuthenticated(false);
    }

    @Override
    public String getName() {
        return this.myAccessUser.getName();
    }
}
Another Provider to authenticate the above token. This would be called by the OAuth2 grant Provider during authentication. This Provider implements the standard authenticate(Authentication) method, returning a new Token populated with the principal and its authorities.
@Service
public class MyInnerAuthenticationProvider implements AuthenticationProvider {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyInnerAuthenticationProvider.class);

    @Autowired
    private MyAuthenticator authenticator;

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        MyAccessUser unvalidatedUser = (MyAccessUser) authentication.getPrincipal();
        String ip = unvalidatedUser.getIpAddress();
        MyAccessUser validatedAccessUser = authenticator.checkIpAddress(ip);

        if (validatedAccessUser != null) {
            Collection<GrantedAuthority> authorities = authenticator.toGrantedAuthorities(
                    validatedAccessUser.getPermissions());
            return new MyInnerAuthenticationToken(validatedAccessUser, authorities);
        } else {
            LOGGER.warn("Could not validate user {}", unvalidatedUser);
            return null;
        }
    }

    @Override
    public boolean supports(Class<?> authentication) {
        return MyInnerAuthenticationToken.class.isAssignableFrom(authentication);
    }
}
Spring Authorization Server allows for creating an OAuth2TokenCustomizer implementation for adding claims to a JWT that are common to multiple grant types. It should get picked up automatically by the framework's JwtGenerator. If you've created one, it's good at this stage to review any adjustments or additions that can be made to it as a result of the new custom grant.
@Component
public class MyTokenCustomizer implements OAuth2TokenCustomizer<JwtEncodingContext> {

    @Override
    public void customize(JwtEncodingContext context) {
        JwtClaimsSet.Builder claimsBuilder = context.getClaims();
        claimsBuilder.claim(ENVIRONMENT_ID, environment);

        // Spring Auth Server's JwtGenerator does not provide JTI by default
        claimsBuilder.claim(JwtClaimNames.JTI, UUID.randomUUID().toString());

        Authentication token = context.getPrincipal();

        // can add principal-specific claims:
        if (token.getPrincipal() instanceof MySubjectClass chiefJwtSubject) {
            ...
        }
    }
}
Once these classes are completed, it's time to wire the new grant support into the authorization server. To wire up the grant-level Converter and Provider, within a WebSecurityConfigurerAdapter subclass:
List<AuthenticationConverter> converters = new ArrayList<>();
converters.add(resourceOwnerPasswordAuthenticationConverter);

authorizationServerConfigurer
    .tokenEndpoint(tokenEndpoint -> {
        tokenEndpoint
            .accessTokenRequestConverter(new DelegatingAuthenticationConverter(converters))
            // lots more providers
            .authenticationProvider(resourceOwnerPasswordAuthenticationProvider);
    });
The inner Provider, used for the actual authentication of the user, can be configured separately as a @Bean:
@Bean
public MyInnerAuthenticationProvider myInnerAuthenticationProvider() {
    return new MyInnerAuthenticationProvider();
}
Once developed, the new grant is easy to test with Postman. Spring Auth Server uses an oauth2_registered_client table where the client_id and client_secret for clients are defined. Within Postman's Authorization tab, choose the Basic Auth type and enter the client ID and secret as the credentials:
Then the new grant type can be tested with a POST call to the standard /oauth2/token endpoint using that grant_type:
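A rough command-line equivalent of the same request (the client credentials, port, grant type URN, and extra parameter below are all hypothetical placeholders matching the earlier sketches, not values from the linked example):

curl -u my-client-id:my-client-secret \
     -d "grant_type=urn:myco:params:oauth:grant-type:ip_address" \
     -d "ip_address=203.0.113.10" \
     https://localhost:9000/oauth2/token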
Posted by Glen Mazza in Programming at 07:00AM May 05, 2023 | Comments[0]
Below are the steps I followed for creating keys and certificates for local development (at https://localhost:port#) of Tomcat- and Webpack DevServer-powered web applications. The process involves creating a local certificate authority (CA) with a self-signed certificate imported into Firefox and Chrome. Then I created a server key and certificate, the latter signed by the CA, to be used by both application servers. This is for work on macOS with LibreSSL 2.6.5 used for the key commands; the process will vary a bit with other OSes or OpenSSL variants.
Before proceeding, there are a couple of shortcuts for working with self-signed certificates for local development, if perhaps you have only a little bit of development to do and can stand the browser unpleasantries during that time. For Firefox, you can choose to ignore the "self-signed cert" warning, with the development pages continually marked as "not secure" as a consequence. Chrome also provides a couple of options (here and here) for the same. Finally, if your motivation in creating a new key is because you've lost the public key and/or cert for a given private key, see this note on how both can be regenerated from that private key.
Create a Certificate Authority whose certificate will be imported into Firefox and Chrome. Although this certificate will be self-signed, the certificate for the server key that will be used by Tomcat and WDS will be signed by this CA. For these steps, I'm using genpkey to generate the private key and req to sign it, with a lifespan of 825 days as that's apparently the max permitted on MacOS.
(For the commands in this entry, I'm using folders of ../certs and ../certs/ca.)
openssl genpkey -algorithm RSA -out ca/MyCA.key -pkeyopt rsa_keygen_bits:2048 -aes-256-cbc
openssl req -new -sha256 -key ca/MyCA.key -out ca/MyCA.csr
openssl x509 -req -sha256 -days 825 -in ca/MyCA.csr -signkey ca/MyCA.key -out ca/MyCA.crt
Notes: the following commands can be used to view the contents of the key, CSR, and certificate just generated:
openssl pkey -in MyCA.key -text -noout
openssl req -text -in MyCA.csr -noout
openssl x509 -text -in MyCA.crt -noout
Import the CA certificate into Firefox and Chrome.
For Firefox, menu item Firefox -> Preferences -> Privacy & Security -> View Certificates button -> Authorities -> Import MyCA.crt, then select "Trust this CA to identify websites." The CA will be listed on the Authorities tab under the Organization name you gave when creating the CSR.
Chrome uses Apple's Keychain Access to store certificates. It can be activated from menu Chrome -> Preferences -> Privacy & Security -> Security Tab -> Manage Certificates. However, I found it clumsy to work with and simpler to use the command line:
sudo security add-trusted-cert -k /Library/Keychains/System.keychain -d ca/MyCA.crt
Once run, you'll find it under the system keychain, "Certificates" category in Keychain Access.
Create the server key, in which you specify the domain name(s) that applications using the key will be serving. The first thing to note is that Chrome requires usage of the subjectAltName extension in the certificate; Common Name alone will not work. There are several ways to configure this extension; the simplest I found that would work with my version of LibreSSL was to use an extension file as explained in the OpenSSL cookbook. (Note "TightBlog" refers to my open source project.)
Place in servercert.ext:
subjectAltName = DNS:localhost
Multiple domains can be specified; just make them comma-delimited, as in the example below.
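For example (the extra host name and IP entry here are just hypothetical placeholders):

subjectAltName = DNS:localhost, DNS:myapp.local.test, IP:127.0.0.1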
Then run these commands:
openssl genpkey -algorithm RSA -out tightblog.key -pkeyopt rsa_keygen_bits:2048
openssl req -new -sha256 -key tightblog.key -out tightblog.csr
openssl x509 -req -in tightblog.csr -CA ca/MyCA.crt -CAkey ca/MyCA.key -CAcreateserial -out tightblog.crt -days 824 -sha256 -extfile servercert.ext
Configure the keys and/or certs on the development servers. For TightBlog development, the application runs on Tomcat, however I use Webpack DevServer while developing the Vue pages, so I have two servers to configure. SSL information for Tomcat is here and for WDS is here.
For Vue, I create a local-certs.js in the same directory as my vue.config.js which contains:
const fs = require("fs");

module.exports = {
  key: fs
    .readFileSync("/Users/gmazza/opensource/certs/tightblog.key")
    .toString(),
  cert: fs
    .readFileSync("/Users/gmazza/opensource/certs/tightblog.crt")
    .toString()
};
For Tomcat, I found Jens Grassel's instructions to be useful. He has us create a PKCS #12 key-and-certificate-chain bundle followed by usage of Java keytool to import the bundle into the keystore configured in the Tomcat server.xml file:
openssl pkcs12 -export -in tightblog.crt -inkey tightblog.key -chain -CAfile MyCA.crt -name "MyTomcatCert" -out tightblogForTomcat.p12
keytool -importkeystore -deststorepass changeit -destkeystore /Users/gmazza/.keystore -srckeystore tightblogForTomcat.p12 -srcstoretype PKCS12
For Tomcat, you'll want no more than one alias (here, "MyTomcatCert") in the keystore, or specify the keyAlias in the Tomcat server.xml. The keytool list certs and delete alias commands can help you explore and adjust the Tomcat keystore.
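For example, assuming the keystore location and changeit password used above, something like the following should list and remove entries:

keytool -list -v -keystore /Users/gmazza/.keystore -storepass changeit
keytool -delete -alias MyTomcatCert -keystore /Users/gmazza/.keystore -storepass changeit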
I activated the application in both browsers and checked the URL bar to confirm that the certificates were accepted. For my local development I have the application running on Tomcat at https://localhost:8443/ and the Vue pages running on WDS at https://localhost:8080. Examples showing the Vue URL on Firefox and the Tomcat one on Chrome are as below. Both URLs were accepted by both browsers, but note Firefox does caution that the CA the cert was signed with is not one of the standard CA certs that it ships with.
Posted by Glen Mazza in Programming at 07:00AM May 23, 2021 | Comments[1]
TightBlog 3.7 released just now (Release Page). This version requires a few database table changes over 3.6; if upgrading, be sure to review the database instructions given on the release page. It features updated comment and spam handling processes, as described on the TightBlog Wiki.
Posted by Glen Mazza in Programming at 12:00AM Dec 28, 2019 | Comments[0]
Tom Homberg provided a nice guide for implementing user-feedback validation within Spring applications, quite helpful for me in improving what I had in TightBlog. He creates a field-and-message Violation object (e.g., {"Name": "Name is required"}), a list of which is wrapped by a ValidationErrorResponse, the latter of which gets serialized to JSON and sent to the client to display validation errors. For my own implementation, I left the field value blank to display errors not specific to a particular field, and used the response both for 400-type messages for user-input problems and for generic 500-type messages for system errors.
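A minimal sketch of the two transfer objects (each in its own file), consistent with how they are used in the snippets below; field names follow Tom Homberg's guide and may differ from TightBlog's actual classes:

import java.util.ArrayList;
import java.util.List;

public class Violation {
    private final String fieldName;
    private final String message;

    public Violation(String message) {
        this(null, message);
    }

    public Violation(String fieldName, String message) {
        this.fieldName = fieldName;
        this.message = message;
    }

    public String getFieldName() {
        return fieldName;
    }

    public String getMessage() {
        return message;
    }
}

public class ValidationErrorResponse {
    private final List<Violation> errors = new ArrayList<>();

    public ValidationErrorResponse() {
    }

    public ValidationErrorResponse(List<Violation> errors) {
        this.errors.addAll(errors);
    }

    public List<Violation> getErrors() {
        return errors;
    }
}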
Implementing this validation for TightBlog's blogger UI, I soon found it helpful to have convenience methods for quick creation of the Violations, ValidationErrorResponses and Spring ResponseEntities for providing feedback to the client:
public static ResponseEntity<ValidationErrorResponse> badRequest(String errorMessage) {
    return badRequest(new Violation(errorMessage));
}

public static ResponseEntity<ValidationErrorResponse> badRequest(Violation error) {
    return badRequest(Collections.singletonList(error));
}

public static ResponseEntity<ValidationErrorResponse> badRequest(List<Violation> errors) {
    return ResponseEntity.badRequest().body(new ValidationErrorResponse(errors));
}
i18n can be handled via the Locale method argument, one of the parameters automatically provided by Spring:
@Autowired
private MessageSource messages;

@PostMapping(...)
public ResponseEntity doFoo(Locale locale) {
    ...
    if (error) {
        return ValidationErrorResponse.badRequest(
                messages.getMessage("mediaFile.error.duplicateName", null, locale));
    }
}
On the front end, I have Angular.js trap the code and then output the error messages (I'm not presently using the field names). The below is truncated for brevity (full source: JavaScript and JSP):
this.commonErrorResponse = function(response) {
    self.errorObj = response.data;
}

<div id="errorMessageDiv" class="alert alert-danger" role="alert" ng-show="ctrl.errorObj.errors" ng-cloak>
    <button type="button" class="close" data-ng-click="ctrl.errorObj.errors = null" aria-label="Close">
        <span aria-hidden="true">×</span>
    </button>
    <ul class="list-unstyled">
        <li ng-repeat="item in ctrl.errorObj.errors">{{item.message}}</li>
    </ul>
</div>
Appearance:
Additionally, I was able to remove a fair amount of per-endpoint boilerplate by creating a single ExceptionHandler for unexpected 500 response code system errors and attaching it to my ControllerAdvice class so it would be used by all REST endpoints. For these types of exceptions usually a generic "System error occurred, please contact Administrator" message is sent to the user. However, I added a UUID that both appears on the client and goes into the logs along with the exception details, making it easy to search the logs for the specific problem. The exception handler (from the TightBlog source):
@ExceptionHandler(value = Exception.class)
// avoiding use of ResponseStatus as it activates Tomcat HTML page (see ResponseStatus JavaDoc)
public ResponseEntity<ValidationErrorResponse> handleException(Exception ex, Locale locale) {
    UUID errorUUID = UUID.randomUUID();
    log.error("Internal Server Error (ID: {}) processing REST call", errorUUID, ex);

    ValidationErrorResponse error = new ValidationErrorResponse();
    error.getErrors().add(new Violation(messages.getMessage(
            "generic.error.check.logs", new Object[] {errorUUID}, locale)));
    return ResponseEntity.status(500).body(error);
}
Screen output:
Log messaging containing the same UUID:
Posted by Glen Mazza in Programming at 07:00AM Nov 06, 2019 | Comments[0]
Some things learned this past week with ElasticSearch:
Advanced Date Searches: An event search page my company provides for its Pro customers allows for filtering by start date and end date; however, some events do not have an end date defined. We decided to have differing business rules on what the start and end dates will filter, based on whether or not the event has an end date, specifically:

- For events with an end date defined, the event matches if its date range overlaps the filter range, i.e., the event's start date is on or before the filter's end date and the event's end date is on or after the filter's start date.
- For events without an end date defined, the event matches if its start date falls between the filter's start and end dates.
The above business logic had to be implemented in Java, but as an intermediate step I first worked out an ElasticSearch query for it using Kibana. Creating the query first helps immensely in the subsequent conversion to code. For the ElasticSearch query, this is what I came up with (using arbitrary sample dates to test the queries):
GET events-index/_search
{
  "query": {
    "bool": {
      "should": [
        { "bool": {
            "must": [
              { "exists": { "field": "eventMeta.dateEnd" }},
              { "range": { "eventMeta.dateStart": { "lte": "2018-09-01" }}},
              { "range": { "eventMeta.dateEnd": { "gte": "2018-10-01" }}}
            ]
        }},
        { "bool": {
            "must_not": { "exists": { "field": "eventMeta.dateEnd" }},
            "must": [
              { "range": { "eventMeta.dateStart": { "gte": "2018-01-01", "lte": "2019-12-31" }}}
            ]
        }}
      ]
    }
  }
}
As can be seen above, I first used a nested bool query to separate the two main cases, namely events with and without an end date. The should at the top-level bool acts as an OR, indicating documents fitting either situation are desired. I then added the additional date requirements that need to hold for each specific case.
With the query now available, mapping to Java code using ElasticSearch's QueryBuilders (API) was pleasantly straightforward; one can see the roughly 1-to-1 mapping of the code to the above query (the capitalized constants in the code refer to the relevant field names in the documents):
private QueryBuilder createEventDatesFilter(DateFilter filter) {
    BoolQueryBuilder mainQuery = QueryBuilders.boolQuery();

    // query modeled as a "should" (OR), divided by events with and without an end date,
    // with different filtering rules for each.
    BoolQueryBuilder hasEndDateBuilder = QueryBuilders.boolQuery();
    hasEndDateBuilder.must().add(QueryBuilders.existsQuery(EVENT_END_DATE));
    hasEndDateBuilder.must().add(fillDates(EVENT_START_DATE, null, filter.getStop()));
    hasEndDateBuilder.must().add(fillDates(EVENT_END_DATE, filter.getStart(), null));
    mainQuery.should().add(hasEndDateBuilder);

    BoolQueryBuilder noEndDateBuilder = QueryBuilders.boolQuery();
    noEndDateBuilder.mustNot().add(QueryBuilders.existsQuery(EVENT_END_DATE));
    noEndDateBuilder.must().add(fillDates(EVENT_START_DATE, filter.getStart(), filter.getStop()));
    mainQuery.should().add(noEndDateBuilder);

    return mainQuery;
}
Bulk Updates: We use a "sortDate" field to indicate the specific date front ends should use for sorting results (whether ascending or descending, and regardless of the actual source of the date used to populate that field). For our news stories we wanted to rely on the last update date for stories that have been updated since their original publication, and the published date otherwise. For certain older records loaded, it turned out that the sortDate was still at the publishedDate when it should have been set to the updateDate. For research, I used the following query to determine the extent of the problem:
GET news-index/_search
{
  "query": {
    "bool": {
      "must": [
        { "exists": { "field": "meta.updateDate" }},
        { "script": {
            "script": "doc['meta.dates.sortDate'].value.getMillis() < doc['meta.updateDate'].value.getMillis()"
        }}
      ]
    }
  }
}
For the above query I used a two part Bool query, first checking for a non-null updateDate in the first clause and then a script clause to find sortDates preceding updateDates. (I found I needed to use .getMillis() for the inequality check to work.)
Next, I used ES' Update by Query API to do an all-at-once update of the records. The API has two parts, an optional query element to indicate the documents I wish to have updated (strictly speaking, in ES, to be replaced with a document with the requested changes) and a script element to indicate the modifications I want to have done to those documents. For my case:
POST news-index/_update_by_query
{
  "script": {
    "source": "ctx._source.meta.dates.sortDate = ctx._source.meta.updateDate",
    "lang": "painless"
  },
  "query": {
    "bool": {
      "must": [
        { "exists": { "field": "meta.updateDate" }},
        { "script": {
            "script": "doc['meta.dates.sortDate'].value.getMillis() < doc['meta.updateDate'].value.getMillis()"
        }}
      ]
    }
  }
}
For running your own updates, it's good to test first by making a do-nothing update in the script (e.g., set sortDate to sortDate) and specifying just one document to be updated, which can be done by adding a document-specific match requirement to the filter query (e.g., { "match": { "id": "...." }}). Kibana should report that just one document was "updated"; if so, switch to the desired update to confirm that single record was updated properly, and then finally remove the match filter to have all desired documents updated.
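A sketch of such a do-nothing test run against the index above (the document id is a made-up placeholder):

POST news-index/_update_by_query
{
  "script": {
    "source": "ctx._source.meta.dates.sortDate = ctx._source.meta.dates.sortDate",
    "lang": "painless"
  },
  "query": {
    "bool": {
      "must": [
        { "match": { "id": "abc-123" }},
        { "exists": { "field": "meta.updateDate" }}
      ]
    }
  }
}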
Posted by Glen Mazza in Programming at 07:00AM Oct 27, 2018 | Comments[0]
For converting from a Java collection, say List<Foo>, to any of several other collections List<Bar1>, List<Bar2>, ..., rather than create separate FooListToBar1List, FooListToBar2List, ... methods, a single generic FooListToBarList method and a series of Foo->Bar1, Foo->Bar2, ... converter functions can be used more succinctly. The below example converts a highly simplified List of SaleData objects to separate Lists of Customer and Product information, using a common generic saleDataListToItemList(saleDataList, converterFunction) method along with passed-in converter functions saleDataToCustomerWithRegion and saleDataToProduct. Of particular note is how the converter functions are specified in the saleDataListToItemList calls. In the case of saleDataToCustomerWithRegion, which takes two arguments (the SaleData object and a region string), a lambda expression is used, while the Product converter can be specified as a simple method reference due to it having only one parameter (the SaleData object).
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {

    public static void main(String[] args) {
        List<SaleData> saleDataList = new ArrayList<>();
        saleDataList.add(new SaleData("Bob", "radio"));
        saleDataList.add(new SaleData("Sam", "TV"));
        saleDataList.add(new SaleData("George", "laptop"));

        List<Customer> customerList = saleDataListToItemList(saleDataList,
                sd -> Main.saleDataToCustomerWithRegion(sd, "Texas"));
        System.out.println("Customers: ");
        customerList.forEach(System.out::println);

        List<Product> productList = saleDataListToItemList(saleDataList, Main::saleDataToProduct);
        System.out.println("Products: ");
        productList.forEach(System.out::println);
    }

    private static <T> List<T> saleDataListToItemList(List<SaleData> sdList, Function<SaleData, T> converter) {
        // handling potentially null sdList: https://stackoverflow.com/a/43381747/1207540
        return Optional.ofNullable(sdList).map(List::stream).orElse(Stream.empty())
                .map(converter).collect(Collectors.toList());
    }

    private static Product saleDataToProduct(SaleData sd) {
        return new Product(sd.getProductName());
    }

    private static Customer saleDataToCustomerWithRegion(SaleData sd, String region) {
        return new Customer(sd.getCustomerName(), region);
    }

    private static class SaleData {
        private String customerName;
        private String productName;

        SaleData(String customerName, String productName) {
            this.customerName = customerName;
            this.productName = productName;
        }

        String getProductName() {
            return productName;
        }

        String getCustomerName() {
            return customerName;
        }
    }

    private static class Product {
        private String name;

        Product(String name) {
            this.name = name;
        }

        @Override
        public String toString() {
            return "Product{" + "name='" + name + '\'' + '}';
        }
    }

    private static class Customer {
        private String name;
        private String region;

        Customer(String name, String region) {
            this.name = name;
            this.region = region;
        }

        @Override
        public String toString() {
            return "Customer{" + "name='" + name + '\'' + ", region='" + region + '\'' + '}';
        }
    }
}
Output from running:
Customers: 
Customer{name='Bob', region='Texas'}
Customer{name='Sam', region='Texas'}
Customer{name='George', region='Texas'}
Products: 
Product{name='radio'}
Product{name='TV'}
Product{name='laptop'}
Posted by Glen Mazza in Programming at 07:00AM Oct 07, 2018 | Comments[0]