
    The world of OOP development in general, and the Java language in particular, lives a very active life. It has its fashion trends too, and today we'll look at one of the main trends of the season: the ReactiveX framework. If you're still on the fence about this wave, I promise you'll love it! It's definitely better than high-waisted jeans :).

    Reactive Programming

    As soon as OOP languages reached mass adoption, developers realized how much the capabilities of C-like languages were sometimes lacking. Since writing code in a purely functional style seriously hurts the quality of OOP code, and therefore the maintainability of the project, a hybrid was invented: reactive programming.

    The reactive development paradigm is based on the idea of ​​constantly monitoring changes in the state of an object. If such changes have occurred, then all interested objects should receive the updated data and work only with them, forgetting about the old ones.

    A good example of a reactive programming idea is an Excel spreadsheet. If you link multiple cells with one formula, the result of the calculation will change every time the data in those cells changes. For accounting, such dynamic changes in data are commonplace, but for programmers this is rather an exception.

    a = 3; b = 4; c = a + b; F1(c); a = 1; F2(c);

    In this example, the functions F1 and F2 will work with different values of the variable c. Often, though, both functions are required to see only the most current data. Reactive programming makes it possible to call F1 again with the new value immediately, without changing the logic of the functions themselves. This code structure gives the application the ability to respond instantly to any changes, which makes it fast, flexible, and responsive.
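    The idea behind this snippet can be sketched in plain Java (ReactiveCell and its method names are ours, not a library API): a cell that recomputes c = f(a, b) and pushes the fresh value to every interested function whenever an input changes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntBinaryOperator;
import java.util.function.IntConsumer;

// A tiny "reactive cell": c = f(a, b) is recomputed whenever a or b changes,
// and every registered listener immediately receives the fresh value.
class ReactiveCell {
    private int a, b;
    private final IntBinaryOperator formula;
    private final List<IntConsumer> listeners = new ArrayList<>();

    ReactiveCell(int a, int b, IntBinaryOperator formula) {
        this.a = a;
        this.b = b;
        this.formula = formula;
    }

    int value() { return formula.applyAsInt(a, b); }

    void onChange(IntConsumer listener) { listeners.add(listener); }

    void setA(int newA) { a = newA; push(); }
    void setB(int newB) { b = newB; push(); }

    private void push() {
        int v = value();                      // recompute c
        listeners.forEach(l -> l.accept(v));  // F1, F2, ... always see current data
    }
}
```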

    ReactiveX

    Implementing reactive programming ideas from scratch can be quite troublesome - there are pitfalls, and it will take a lot of time. Therefore, for many developers this paradigm remained only theoretical material until ReactiveX appeared.

    The ReactiveX framework is a reactive programming tool that works with all popular OOP languages. The creators themselves call it a multi-platform API for asynchronous development, based on the Observer pattern.

    If the term “reactive programming” is a kind of theoretical model, then the “Observer” pattern is a ready-made mechanism for tracking changes in a program. And you have to track them quite often: downloading and updating data, notifications about events, and so on.

    The Observer pattern has been around for about as long as OOP itself. An object whose state can change is called a publisher (the Observable). All other participants interested in these changes are subscribers (Observer, Subscriber). To receive notifications, subscribers register with the publisher, explicitly identifying themselves. From time to time the publisher generates notifications and sends them to its list of registered subscribers.
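    A minimal sketch of the pattern in plain Java (Publisher is our illustrative name, not the ReactiveX API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Publisher (Observable): keeps a list of registered subscribers and pushes
// every new event to each of them.
class Publisher<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    public void publish(T event) {
        // push-based notification: subscribers do not poll for changes
        for (Consumer<T> s : subscribers) s.accept(event);
    }
}
```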

    Actually, the creators of ReactiveX didn’t come up with anything revolutionary, they just conveniently implemented the pattern. And although many OOP languages, and Java in particular, have ready-made implementations of the pattern, this framework contains additional “tuning” that turns the Observer into a very powerful tool.

    RxAndroid

    The port of the ReactiveX library for the Android world is called RxAndroid and is added, as usual, via Gradle:

    compile "io.reactivex:rxandroid:1.1.0"

    The publisher that generates notifications is represented by the Observable class. A publisher can have several subscribers, which we implement with the Subscriber class. The standard behavior for an Observable is to emit one or more messages to its subscribers and then either complete or emit an error. Messages can be simple values or whole objects.

    Observable<String> myObserv = Observable.create(new Observable.OnSubscribe<String>() {
        @Override
        public void call(Subscriber<? super String> subscriber) {
            subscriber.onNext("Hello");
            subscriber.onNext("world");
            subscriber.onCompleted();
        }
    });

    In this case, the myObserv publisher will first send the strings "Hello" and "world", and then a completion notification. The publisher can call the onNext(), onCompleted(), and onError() methods, so subscribers must have them defined.

    Subscriber<String> mySub = new Subscriber<String>() {
        // onCompleted() and onError() omitted here
        @Override
        public void onNext(String value) {
            Log.e("got data", " " + value);
        }
    };

    Everything is ready to go. All that remains is to connect the objects with each other - and “Hello, world!” ready for reactive programming!

    myObserv.subscribe(mySub);

    I must say that this was a very simple example. ReactiveX has many options for the behavior of all pattern participants: filtering, grouping, error handling. The benefits of reactive programming can only be felt by trying it in action. Let's move on to a more serious task.


    Programming languages constantly change and evolve over time, driven by new technologies, modern requirements, or a simple desire to refresh the coding style. Reactive programming can be implemented using frameworks such as ReactiveCocoa. It changes the imperative style of the Objective-C language, and this approach to programming has something to offer over the standard paradigm. This, of course, attracts the attention of iOS developers.

    ReactiveCocoa brings a declarative style to Objective-C. What do we mean by this? The traditional imperative style, used by languages such as C, C++, Objective-C, and Java, can be described as follows: you write directives for the program that must be executed in a certain way. In other words, you say "how to do" something. Declarative programming, in contrast, lets you describe the flow as a sequence of actions, "what to do", without defining "how to do it".

    Imperative vs Functional programming

    The imperative approach to programming involves describing in detail each step the computer must take to complete a task. In essence, the imperative style mirrors the way machine code itself executes, which is why it is characteristic of most programming languages.

    In contrast, the functional approach solves problems using a set of functions that need to be performed. You define the input parameters for each function, and what each function returns. These two programming approaches are very different.

    Here are the main differences between languages:

    1. State changes

    For pure functional programming, state change does not exist, because there are no side effects. A side effect is any change of state beyond the return value, caused by interaction with the outside world. Referential transparency of a subexpression is often defined as the absence of side effects and applies primarily to pure functions: a function call cannot read or modify mutable state outside itself, because by definition every subexpression is just a function call.

    To clarify the point, pure functions have the following attributes:

    • the only observable output is the return value
    • the only inputs it depends on are its arguments
    • the arguments are fully determined before any output is generated

    Despite the fact that the functional approach minimizes side effects, they cannot be completely avoided since they are an inherent part of any development.
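    To make the distinction concrete, here is a small Java sketch (the class and function names are ours): the pure function is referentially transparent, while its impure counterpart depends on hidden mutable state.

```java
// A pure function: its only observable output is the return value, and it
// depends only on its arguments, so the same call always yields the same
// result (referential transparency).
class Purity {
    static int add(int a, int b) {
        return a + b;
    }

    // Impure counterpart: the result also depends on hidden mutable state,
    // so repeating the same call returns different values over time.
    static int counter = 0;

    static int addAndCount(int a, int b) {
        counter++;            // side effect
        return a + b + counter;
    }
}
```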

    On the other hand, functions in imperative programming do not have referential transparency, and this may be the key difference between the declarative approach and the imperative one. Side effects are widely used to implement state and I/O: statements in an imperative language can change state, so the same expression can yield different values at different times.

    How about ReactiveCocoa? It is a functional framework for Objective-C, a conceptually imperative language with no explicit notion of pure functions. It encourages avoiding state changes, but places no hard restrictions on side effects.

    2. First-class functions

    In functional programming, functions are first-class objects. What does that mean? It means a function can be passed as a parameter, assigned to a variable, or returned from another function. Why is this convenient? It makes it easy to manage blocks of execution and to create and combine functions in a variety of ways, without complications such as function pointers (char *(*(**foo)()); - have fun!).

    Languages that use the imperative approach have their own peculiarities regarding first-class functions. What about Objective-C? It has blocks as its closure implementation. Higher-order functions (HOFs) can be modeled by taking blocks as parameters: the block is a closure, and a higher-order function can be built from a particular set of blocks.

    However, manipulating first-class functions in functional languages is quicker and takes fewer lines of code.
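    For comparison, this is what first-class functions look like with Java lambdas (Hof is an illustrative name): a function is stored in a variable, passed as an argument, and returned from another function.

```java
import java.util.function.Function;

// Functions as first-class values: stored, passed, returned, and composed,
// with none of the function-pointer syntax.
class Hof {
    // A higher-order function: takes a function and returns a new one that
    // applies it twice.
    static Function<Integer, Integer> twice(Function<Integer, Integer> f) {
        return f.andThen(f);
    }

    static int applyTwice(Function<Integer, Integer> f, int x) {
        return twice(f).apply(x);
    }
}
```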

    3. Flow control

    In functional programming, imperative-style loops are expressed as recursive function calls: iteration in functional languages is usually done through recursion. Why? To avoid mutable loop state. For Objective-C developers, loops feel much more programmer-friendly, and recursion can bring problems of its own, such as excessive stack and RAM consumption.

    But! We can write such a function without using loops or recursion. For each of the infinitely many specialized actions that could be applied to the elements of a collection, functional programming offers reusable iteration functions such as map, filter, and fold. These functions are useful for refactoring source code: they reduce duplication and spare you from writing a separate function for each case. (Read on, we have more on this!)
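    In Java, the same idea is available through the Streams API; a small sketch (the word-processing task is invented for illustration):

```java
import java.util.List;

// The loop "for each word: keep the long ones, uppercase them, join" becomes
// a declarative chain of reusable iteration functions: filter -> map -> fold.
class Pipeline {
    static String shout(List<String> words) {
        return words.stream()
                .filter(w -> w.length() > 2)       // keep
                .map(String::toUpperCase)          // transform
                .reduce("", (acc, w) -> acc.isEmpty() ? w : acc + " " + w); // fold
    }
}
```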

    4. Order of execution

    Declarative expressions capture only the logical relationships between a subexpression's function arguments and its results. In the absence of side effects, the state transition of each function call occurs independently of the others.

    In imperative code, by contrast, the order in which expressions execute depends on mutable state. The order of execution therefore matters and is implicitly determined by the organization of the source code. Here we can point out the difference between the evaluation strategies of the two approaches.

    Lazy evaluation, or on-demand evaluation, is a strategy in functional programming languages. In this case, the evaluation of the expression is delayed until its value is needed, thereby avoiding repeated evaluations. In other words, expressions are evaluated only when the dependent expression is evaluated. The order of operations becomes uncertain.
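    Lazy, memoized evaluation can be sketched in Java with a small wrapper (Lazy is our illustrative class, not a standard type):

```java
import java.util.function.Supplier;

// Lazy, memoized evaluation: the expression runs only when the value is first
// needed, and repeated reads reuse the cached result.
class Lazy<T> {
    private Supplier<T> supplier;
    private T value;
    private boolean evaluated = false;

    Lazy(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        if (!evaluated) {          // evaluate on first demand only
            value = supplier.get();
            evaluated = true;
            supplier = null;       // allow the closure to be collected
        }
        return value;
    }

    boolean isEvaluated() { return evaluated; }
}
```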

    In contrast, eager (strict) evaluation in an imperative language means an expression is evaluated as soon as it is bound to a variable, which dictates the order of execution. This makes it easier to determine when subexpressions (including functions) will be evaluated, which matters because subexpressions can have side effects that affect the evaluation of other expressions.

    5. Amount of code

    This one is important: the functional approach requires writing less code than the imperative one. That means fewer bugs, less code to test, and a more productive development cycle. Since a system constantly evolves and grows, this matters.

    Main components of ReactiveCocoa

    Functional programming works with the concepts of a future (a read-only placeholder for a value that may not have been computed yet) and a promise (the writable, single-assignment container that completes the corresponding future). What's good about them? In imperative programming you can only work with values that already exist, which leads to the need to synchronize asynchronous code and other difficulties. The concepts of futures and promises let you work with values that have not been created yet: asynchronous code is written in a synchronous style.
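    In Java terms, CompletableFuture plays both roles at once: the producer side acts as the promise and the consumer side as the future. A small sketch (the greet pipeline is invented for illustration):

```java
import java.util.concurrent.CompletableFuture;

// CompletableFuture is both a future (a read-only view you compose on) and a
// promise (the producer completes it exactly once). Work on the value is
// described synchronously and runs when the value arrives.
class Promises {
    static CompletableFuture<String> greet(CompletableFuture<String> name) {
        return name.thenApply(n -> "Hello, " + n + "!");
    }
}
```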


    Signal

    In ReactiveCocoa, futures and promises take the form of signals, the framework's main component. A signal represents a stream of events that will be delivered in the future: you subscribe to it and get access to events as they happen over time. A signal is a push-driven stream and can represent a button click, an asynchronous network operation, a timer, any other UI event, or anything else that changes over time. Signals can chain the results of asynchronous operations and efficiently combine multiple event sources.

    Sequence

    Another type of stream is a sequence. Unlike a signal, a sequence is a pull-driven stream: a kind of collection, similar in purpose to NSArray. RACSequence performs operations only when you ask for them, rather than all at once as NSArray does; values in a sequence are evaluated lazily by default. Using only part of a sequence can therefore improve performance. RACSequence lets Cocoa collections be processed in a uniform, declarative way: RAC adds a -rac_sequence method to most Cocoa collection classes so they can be used as RACSequences.

    Command

    A RACCommand creates, and subscribes to, a signal in response to some action. This applies primarily to UI interactions. The UIKit categories provided by ReactiveCocoa for most UIKit controls give us a convenient way to handle UI events. Imagine we have to register a user in response to a button tap. The command can then represent the network request: while it is executing, the button switches to the "disabled" state, and back again when it finishes. What else? We can pass an "enabled" signal into a command (Reachability is a good example): if the server is unreachable (our "enabled" signal says NO), the command becomes disabled, and every control bound to the command reflects this state.

    Examples of basic operations

    Here are some diagrams of how basic operations with RACSignals work:

    Merge

    + (RACSignal *)merge:(id<NSFastEnumeration>)signals;


    The resulting stream combines the events of both source streams. So +merge: is useful when you don't care about the specific event source but would like to process the events in one place. In our example, stateLabel.text is driven by three different signals: progress, completion, and errors.

    RACCommand *loginCommand = [[RACCommand alloc] initWithSignalBlock:^RACSignal *(id input) {
        // let's login!
    }];
    RACSignal *executionSignal = [loginCommand.executionSignals switchToLatest];
    RACSignal *completionSignal = [[[executionSignal materialize]
        filter:^BOOL(RACEvent *event) {
            return event.eventType == RACEventTypeCompleted;
        }]
        map:^id(id value) {
            return @"Done";
        }];
    RACSignal *errorSignal = loginCommand.errors;

    CombineLatest

    + (RACSignal *)combineLatest:(id<NSFastEnumeration>)signals reduce:(id (^)())reduceBlock;

    The resulting stream contains the latest values of the source streams. If one of the streams has not yet produced a value, the result will be empty.


    When can we use it? Let's take our previous example and add more logic to it. It's useful to only enable the login button when the user has entered the correct email and password, right? We can declare this rule like this:

    RACSignal *enabledSignal = [RACSignal
        combineLatest:@[self.emailField.rac_textSignal, self.passwordField.rac_textSignal]
        reduce:^id (NSString *email, NSString *password) {
            // emailField, passwordField and isValidEmail are illustrative names;
            // the originals were lost in the source
            return @([email isValidEmail] && password.length > 3);
        }];

    Now let's change our login command a little and connect it to the actual loginButton:

    RACCommand *loginCommand = [[RACCommand alloc]
        initWithEnabled:enabledSignal
        signalBlock:^RACSignal *(id input) {
            // let's login!
        }];
    loginButton.rac_command = loginCommand;

    FlattenMap

    - (RACSignal *)flattenMap:(RACStream * (^)(id value))block;

    Using the given function (f), a new stream is created for each value in the original stream. The resulting stream delivers the values produced by these new signals, so the operation can be asynchronous.


    Let's imagine that your authorization request to the system consists of two separate parts: receiving data from Facebook (identifier, etc.) and passing it to the Backend. One of the requirements is to be able to cancel login. Therefore, client code must handle the state of the login process to be able to cancel it. This results in a lot of boilerplate code, especially if you can log in from multiple places.

    How does ReactiveCocoa help you? This could be a login implementation:

    - (RACSignal *)authorizeUsingFacebook {
        return [[[self openFacebookSession]
            flattenMap:^RACStream *(FBSession *session) {
                return [self fetchProfileWithSession:session];
            }]
            flattenMap:^RACStream *(NSDictionary *profile) {
                return [self authorizeOnBackendWithProfile:profile];
            }];
    }
    // -openFacebookSession, -fetchProfileWithSession: and
    // -authorizeOnBackendWithProfile: are illustrative helper names;
    // the originals were lost in the source.

    Legend:

    + the first signal opens an FBSession; if necessary, this can lead to a Facebook login prompt.

    - the second signal retrieves profile data through the session opened in the previous step.

    The advantage of this approach is that the entire flow is opaque to the caller: it is represented by a single signal that can be cancelled at any "stage", be it the Facebook login or the backend call.

    Filter

    - (RACSignal *)filter:(BOOL (^)(id value))block;

    The resulting stream contains the values ​​of stream “a” filtered according to the specified function.


    RACSequence *sequence = @[@"Some", @"example", @"of", @"sequence"].rac_sequence;
    RACSequence *filteredSequence = [sequence filter:^BOOL(NSString *string) {
        return string.length > 2; // the predicate is illustrative; the original was lost
    }];

    Map

    - (RACSignal *)map:(id (^)(id value))block;

    Unlike flattenMap, map works synchronously: each value of stream "a" is passed through the given function, f(x) = x + 1 in the diagram, and the mapped value is delivered to the resulting stream.


    Let's say you need to display a model's title on the screen with some attributes applied to it. Map comes into play when "applying some attributes" is written as a separate function:

    RAC(self.titleLabel, attributedText) = [RACObserve(self, model.title)
        map:^id (NSString *modelTitle) {
            // attributes holds the text attributes to apply
            return [[NSAttributedString alloc] initWithString:modelTitle
                                                   attributes:attributes];
        }];

    How it works: this binds the label's text to changes of model.title, applying the custom attributes along the way.

    Zip

    + (RACSignal *)zip:(id<NSFastEnumeration>)streams reduce:(id (^)())reduceBlock;

    An event in the resulting stream is produced once every source stream has generated a matching event; it contains one value from each of the zipped streams.


    In practical terms, zip behaves like dispatch_group_notify. For example, you have three separate signals and need to combine their responses at a single point:

    // signalA, signalB and signalC are illustrative placeholders
    NSArray *signals = @[signalA, signalB, signalC];
    return [RACSignal zip:signals reduce:^id (id first, id second, id third) {
        // combine the three responses at a single point
        return RACTuplePack(first, second, third);
    }];

    Throttle

    - (RACSignal *)throttle:(NSTimeInterval)interval;

    A timer is started for the given interval each time stream "a" produces a value. The value is delivered to the resulting stream only if the timer expires without a newer value arriving; if a new value is produced within the interval, the previous one is discarded and the newer value takes its place.


    A classic case: we need to run a search query as the user types into a search field. A standard task, right? However, constructing and sending a network request on every text change is inefficient, since a text field can generate many such events per second, and you end up with wasteful network usage.
    The usual solution is to add a delay before actually firing the request, typically with an NSTimer. With ReactiveCocoa it's much easier!

    [[self.searchField.rac_textSignal throttle:0.3] subscribeNext:^(NSString *text) {
        // perform network request
    }];

    The important note here is that every "previous" text value produced within the interval is discarded; only the latest value is delivered.
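    Outside of ReactiveCocoa, the throttling rule itself can be modeled deterministically; here is a Java sketch (ours, not the RAC implementation) over timestamped values:

```java
import java.util.ArrayList;
import java.util.List;

// Throttle/debounce model: times[i] is the timestamp of values[i]; a value is
// delivered only if the next value arrives at least `interval` later
// (otherwise it is superseded by the newer one). The last value always fires.
class Throttle {
    static List<String> throttle(double[] times, String[] values, double interval) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < values.length; i++) {
            boolean superseded = i + 1 < values.length
                    && times[i + 1] - times[i] < interval;
            if (!superseded) out.add(values[i]);
        }
        return out;
    }
}
```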

    Delay

    - (RACSignal *)delay:(NSTimeInterval)interval;

    The value received in stream “a” is delayed and transferred to the result stream after a certain time interval.


    Like -throttle:, -delay: only postpones the delivery of "next" and "completed" events; errors are passed through immediately.

    [[signal delay:2.0] subscribeNext:^(NSString *text) {
        // the receiver and interval are illustrative; the original was lost
    }];

    What we love about Reactive Cocoa

    • Introduces Cocoa Bindings to iOS
    • Ability to create operations on future data. Here's some theory about futures & promises from Scala.
    • The ability to represent asynchronous operations in a synchronous manner. Reactive Cocoa simplifies asynchronous software such as network code.
    • Convenient decomposition. Code that deals with user events and application state changes can become very complex and confusing. ReactiveCocoa makes models of dependent operations especially simple. When we represent operations as composable streams (for example, network request handling, user events, and so on), we achieve high modularity and loose coupling, which leads to more frequent code reuse.
    • Behaviors and relationships between properties are defined as declarative.
    • Solves synchronization problems - if you combine multiple signals, then there is one single place to handle all the results (be it next value, completion signal or errors)

    With the RAC framework, you can create and transform sequences of values in a better, higher-level way. RAC makes it easier to manage everything that waits for an asynchronous operation to complete: a network response, a dependent value change, and the subsequent reaction. It may seem difficult to deal with at first, but ReactiveCocoa is infectious!

    I want to tell you about a modern programming discipline that meets the growing demands for scalability, fault tolerance and responsiveness, and is indispensable in both multi-core environments and cloud computing, and also introduce an open online course on it, which will start in just a few days.

    If you haven't heard anything about reactive programming, that's okay. This is a rapidly developing discipline that combines concurrency with event-drivenness and asynchrony. Reactivity is inherent in any web service and distributed system, and serves as the core of many high-performance, highly parallel systems. In short, the course authors propose to consider reactive programming as a natural extension of functional programming (with higher-order functions) to parallel systems with distributed state, coordinated and orchestrated by asynchronous data streams exchanged between active subjects, or actors.

    This is described in more understandable terms in the Reactive Manifesto; below I will retell the main points from it, and a full translation is published on Habr. According to Wikipedia, the term reactive programming has existed for quite a long time and has had practical applications of varying degrees of exoticism, but it received a new impetus for development and adoption quite recently, thanks to the efforts of the authors of the Reactive Manifesto - an initiative group from Typesafe Inc. Typesafe is known in the functional programming community as the company founded by the authors of the elegant Scala language and the revolutionary Akka concurrency platform. They now position their company as the creator of the world's first next-generation reactive platform. Their platform allows you to quickly develop complex user interfaces and provides a new level of abstraction over parallel computing and multi-threading, reducing their inherent risks with guaranteed predictable scaling. It puts the ideas of the Reactive Manifesto into practice and allows the developer to conceptualize and create applications that meet modern needs.

    You can become familiar with the platform and with reactive programming by taking the Principles of Reactive Programming massive open online course. This course is a continuation of Martin Odersky's Principles of Functional Programming in Scala, which has over 100,000 participants and one of the highest completion rates in the world for a massive open online course. Together with the creator of the Scala language, the new course is taught by Erik Meijer, who developed the Rx environment for reactive programming on .NET, and Roland Kuhn, who currently leads the Akka development team at Typesafe. The course covers the key elements of reactive programming and shows how they are used to design event-driven systems that are scalable and fault-tolerant. The educational material is illustrated with short programs and accompanied by a set of tasks, each of which is a software project. If all tasks are successfully completed, participants receive certificates (of course, both participation and certificates are free). The course lasts 7 weeks and starts this Monday, November 4th. A detailed outline, as well as an introductory video, are available on the course page: https://www.coursera.org/course/reactive.

    For those who are interested or in doubt, I offer a condensed summary of the basic concepts of the Reactive Manifesto. Its authors note significant changes in application requirements in recent years. Today, applications are deployed in any environment from mobile devices to cloud clusters with thousands of multi-core processors. These environments place new demands on software and technology. In previous generation architectures, the emphasis was on managed servers and containers, and scaling was achieved through additional expensive hardware, proprietary solutions, and parallel computing through multi-threading. A new architecture is now evolving, in which four important features can be identified that are increasingly prevalent in both consumer and corporate industrial environments. Systems with such an architecture are: event-driven, scalable, fault-tolerant and have a fast response, i.e. responsive. This provides a seamless user experience with a real-time feel, supported by a self-healing, scalable application stack ready for deployment in multi-core and cloud environments. Each of the four characteristics of reactive architecture applies to the entire technology stack, which distinguishes them from links in layered architectures. Let's look at them in a little more detail.


    Event-driven applications assume asynchronous communication between components and implement a loosely coupled design: the sender and recipient of a message need no information about each other or about the transmission mechanism, which lets them concentrate on the content of the communication. Besides the fact that loosely coupled components significantly improve the maintainability, extensibility, and evolution of the system, the asynchronous, non-blocking nature of their interaction also frees up a significant share of resources, reduces call latency, and provides greater throughput than traditional applications. It is the event-driven nature that makes the remaining features of reactive architecture possible.

    Scalability, in the context of reactive programming, is the system's response to changes in load, i.e. elasticity, achieved by the ability to add or release compute nodes as needed. With low coupling, asynchronous messaging, and location transparency, the deployment method and topology of the application become a deployment-time decision, a matter of configuration and load-responsive algorithms. The computer network thus becomes part of an application that is explicitly distributed from the start.

    Fault tolerance in a reactive architecture also becomes part of the design, and this makes it significantly different from the traditional approach of ensuring continuous availability with redundant servers that take over in the event of failure. The resilience of such a system comes from its ability to respond correctly to failures of individual components, isolate those failures by storing their context in the form of the messages that caused them, and pass those messages to another component that can decide how to handle the error. This approach keeps the business logic of the application clean: the failure-handling logic is separated from it and expressed in an explicit, declarative form, so failures are registered, isolated, and handled by the system itself. To build such self-healing systems, components are arranged hierarchically, and a problem is escalated to the level that can solve it.

    And finally, responsiveness is the system's ability to react to user input regardless of load and failures. Such applications involve the user in interaction and create a feeling of close, real-time connection with the system. Responsiveness is relevant not only in real-time systems but is necessary for a wide range of applications. Moreover, a system that is unable to respond quickly at the moment of failure cannot be considered fault-tolerant. Responsiveness is achieved by using observable models, event streams, and stateful clients. Observable models generate events when their state changes and enable real-time interaction between users and systems, while event streams provide the abstraction on which this interaction is built, through non-blocking asynchronous transformations and communications.

    Thus, reactive applications represent a balanced approach to solving a wide range of problems in modern software development. Built on an event-driven foundation, they provide the tools needed to guarantee scalability and fault tolerance and support a rich, responsive user experience. The authors expect that an increasing number of systems will adhere to the principles of the reactive manifesto.

    In addition, I provide the course plan without translation. Just in case you've read this far and are still interested.

    Week 1: Review of Principles of Functional Programming: substitution model, for-expressions and how they relate to monads. Introduces a new implementation of for-expressions: random value generators. Shows how this can be used in randomized testing and gives an overview of ScalaCheck, a tool which implements this idea.

    Week 2: Functional programming and mutable state. What makes an object mutable? How this impacts the substitution model. Extended example: Digital circuit simulation

    Week 3: Futures. Introduces futures as another monad, with for-expressions as concrete syntax. Shows how futures can be composed to avoid thread blocking. Discusses cross-thread error handling.

    Week 4: Reactive stream processing. Generalizing futures to reactive computations over streams. Stream operators.

    Week 5: Actors. Introduces the Actor Model, actors as encapsulated units of consistency, asynchronous message passing, discusses different message delivery semantics (at most once, at least once, exactly once) and eventual consistency.

    Week 6: Supervision. Introduces reification of failure, hierarchical failure handling, the Error Kernel pattern, lifecycle monitoring, discusses transient and persistent state.

    Week 7: Conversation Patterns. Discusses the management of conversational state between actors and patterns for flow control, routing of messages to pools of actors for resilience or load balancing, acknowledgment of reception to achieve reliable delivery.

    Let's go.

    Reactive programming at first sounds like the name of an emerging paradigm, but in fact it refers to a programming technique that uses an event-driven approach to work with asynchronous data streams: reactive systems react to continuously changing data by executing a series of events.
    Reactive programming follows the Observer design pattern, which can be defined as follows: when one object changes state, all the other interested objects are notified and updated accordingly. So instead of polling for changes, events are pushed asynchronously so that observers can process them. In this setting the observers are functions that are executed when an event is dispatched, and the data stream itself is the observable.

    Almost all languages ​​and frameworks use this approach in their ecosystem, and the latest versions of Java are no exception. In this article, I will explain how you can apply reactive programming using the latest version of JAX-RS in Java EE 8 and Java 8 functionality.

    Reactive Manifesto

    The Reactive Manifesto lists four fundamental properties that make an application flexible, loosely coupled, and easy to scale, and therefore capable of being reactive: the application must be responsive, elastic (and therefore scalable), resilient, and message-driven.

    The underlying goal is a truly responsive application. Say we have an application in which one large thread handles user requests and, once the work is done, sends the responses back to the original requesters. When the application receives more requests than it can handle, this thread becomes a bottleneck and the application loses its responsiveness. To remain responsive, the application must be scalable and resilient: an application that can recover from failures on its own can be considered resilient. In the experience of most developers, only a message-driven architecture lets an application be scalable, resilient, and responsive.

    Reactive programming began to be adopted in Java 8 and Java EE 8. The Java language introduced concepts such as CompletionStage and its implementation CompletableFuture, and Java began using these features in specifications such as the Reactive Client API in JAX-RS.
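    Before moving on to JAX-RS itself, here is a minimal, framework-free sketch of those two Java 8 primitives; it uses only java.util.concurrent, and the strings involved are arbitrary example values.

```java
import java.util.concurrent.CompletableFuture;

public class CompletionStageDemo {
    public static void main(String[] args) {
        // supplyAsync runs the supplier on the common fork-join pool;
        // each then* call describes the next stage of the computation
        // without blocking the calling thread
        CompletableFuture<String> greeting =
            CompletableFuture.supplyAsync(() -> "hello")
                             .thenApply(String::toUpperCase)
                             .thenCombine(CompletableFuture.completedFuture("world"),
                                          (a, b) -> a + " " + b);

        // join() blocks only here, at the very end, to read the result
        System.out.println(greeting.join()); // HELLO world
    }
}
```

    CompletableFuture implements the CompletionStage interface, so anywhere the JAX-RS client hands you a CompletionStage you can chain stages in exactly this style.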

    JAX-RS 2.1 Reactive Client API

    Let's see how reactive programming can be used in Java EE 8 applications. To understand the process, you need a basic knowledge of the Java EE API.

    JAX-RS 2.1 introduced a new way to create a REST client with support for reactive programming. The default invoker implementation offered in JAX-RS is synchronous, which means that the client being created will send a blocking call to the server's endpoint. An example implementation is presented in Listing 1.

    Response response = ClientBuilder.newClient()
            .target("http://localhost:8080/service-url")
            .request()
            .get();
    Since version 2.0, JAX-RS provides support for creating an asynchronous invoker on the client API with a simple call to the async() method, as shown in Listing 2.

    Future<Response> response = ClientBuilder.newClient()
            .target("http://localhost:8080/service-url")
            .request()
            .async()
            .get();
    Using an asynchronous invoker on the client returns a Future instance parameterized with javax.ws.rs.core.Response . You can then either poll for the response by calling future.get() or register a callback to be invoked when the HTTP response is available. Both approaches are suitable for asynchronous programming, but things tend to get complicated once you want to group callbacks or add conditional cases on top of these asynchronous invocations.
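    The difference between the two styles can be sketched with stdlib types alone, with no JAX-RS involved: blocking on a Future versus registering a callback on a CompletableFuture. The "response-N" strings are placeholders for real HTTP responses.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class FutureVsCallback {
    public static void main(String[] args)
            throws ExecutionException, InterruptedException {
        // Option 1: block on a Future until the response is available
        Future<String> future = CompletableFuture.supplyAsync(() -> "response-1");
        System.out.println(future.get()); // blocks the calling thread here

        // Option 2: register a callback that fires when the result arrives;
        // the calling thread is free to do other work in the meantime
        CompletableFuture.supplyAsync(() -> "response-2")
                         .thenAccept(res -> System.out.println("callback got " + res))
                         .join(); // joined only so the demo doesn't exit early
    }
}
```

    The callback style scales to chains and combinations of calls, which is exactly where the plain Future style becomes unwieldy.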

    JAX-RS 2.1 offers a reactive way around these problems with the new JAX-RS Reactive Client API. It's as simple as calling the rx() method while building the client. In Listing 3, rx() returns a reactive invoker, and the client returns a response of type CompletionStage<Response>. In other words, rx() switches from a synchronous invoker to a reactive one with a single call.

    CompletionStage<Response> response = ClientBuilder.newClient()
            .target("http://localhost:8080/service-url")
            .request()
            .rx()
            .get();
    CompletionStage<T> is a new interface introduced in Java 8. As its name suggests, it represents a computation that may be a stage within a larger computation. It is the only representative of Java 8's reactive features that made it into JAX-RS.
    After receiving the response instance, I can call thenAcceptAsync(), providing a piece of code that will be executed asynchronously when the response becomes available, as shown in Listing 4.

    response.thenAcceptAsync(res -> {
        Temperature t = res.readEntity(Temperature.class);
        // do stuff with t
    });
    Adding reactivity to a REST endpoint

    The reactive approach is not limited to the client side of JAX-RS; it can be used on the server side as well. As an example, I'll first create a simple scenario in which I request a list of locations from one endpoint. Then, for each location, I make a separate call with the location data to another endpoint to get the temperature value. The interaction between the endpoints is shown in Figure 1.

    Figure 1. Interaction between the endpoints

    First I define the domain model and then the services for each model. Listing 5 shows the Forecast class, which wraps the Location and Temperature classes.

    public class Temperature {
        private Double temperature;
        private String scale;
        // getters & setters
    }

    public class Location {
        String name;
        public Location() {}
        public Location(String name) {
            this.name = name;
        }
        // getters & setters
    }

    public class Forecast {
        private Location location;
        private Temperature temperature;
        public Forecast(Location location) {
            this.location = location;
        }
        public Forecast setTemperature(final Temperature temperature) {
            this.temperature = temperature;
            return this;
        }
        // getters
    }
    To wrap the forecast list, the ServiceResponse class is implemented in Listing 6.

    public class ServiceResponse {
        private long processingTime;
        private List<Forecast> forecasts = new ArrayList<>();
        public void setProcessingTime(long processingTime) {
            this.processingTime = processingTime;
        }
        public ServiceResponse forecasts(List<Forecast> forecasts) {
            this.forecasts = forecasts;
            return this;
        }
        // getters
    }
    The LocationResource shown in Listing 7 defines three sample locations returned with the /location path.

    @Path("/location") public class LocationResource ( @GET @Produces(MediaType.APPLICATION_JSON) public Response getLocations() ( List locations = new ArrayList<>(); locations.add(new Location("London")); locations.add(new Location("Istanbul")); locations.add(new Location("Prague")); return Response.ok(new GenericEntity >(locations)()).build(); ) )
    TemperatureResource, shown in Listing 8, returns a randomly generated temperature value between 30 and 50 for a given location. A 500 ms delay was added to the implementation to simulate reading a sensor.

    @Path("/temperature") public class TemperatureResource ( @GET @Path("/(city)") @Produces(MediaType.APPLICATION_JSON) public Response getAverageTemperature(@PathParam("city") String cityName) ( Temperature temperature = new Temperature(); temperature.setTemperature((double) (new Random().nextInt(20) + 30)); temperature.setScale("Celsius"); try ( Thread.sleep(500); ) catch (InterruptedException ignored) ( ignored.printStackTrace(); ) return Response.ok(temperature).build() ) )
    First I'll show a synchronous implementation of ForecastResource (see Listing 9). It retrieves all locations and then, for each location, calls the temperature service to get the value in degrees Celsius.

    @Path("/forecast") public class ForecastResource ( @Uri("location") private WebTarget locationTarget; @Uri("temperature/(city)") private WebTarget temperatureTarget; @GET @Produces(MediaType.APPLICATION_JSON) public Response getLocationsWithTemperature () ( long startTime = System.currentTimeMillis(); ServiceResponse response = new ServiceResponse(); List locations = locationTarget .request() .get(new GenericType >()()); locations.forEach(location -> ( Temperature temperature = temperatureTarget .resolveTemplate("city", location.getName()) .request() .get(Temperature.class); response.getForecasts().add(new Forecast(location) .setTemperature(temperature)); long endTime = System.currentTimeMillis(); response.setProcessingTime(endTime - startTime); return Response.ok(response).build(); ) )
    When the forecast endpoint is queried at /forecast , you'll get output similar to Listing 10. Note that processing the request took 1,533 ms, which makes sense: querying temperature values synchronously from three different locations adds up to roughly 1.5 s.

    ( "forecasts": [ ( "location": ( "name": "London"), "temperature": ( "scale": "Celsius", "temperature": 33 ) ), ( "location": ( "name ": "Istanbul" ), "temperature": ( "scale": "Celsius", "temperature": 38 ) ), ( "location": ( "name": "Prague" ), "temperature": ( "scale ": "Celsius", "temperature": 46 ) ) ], "processingTime": 1533 )
    So far everything is going according to plan. It's time to introduce reactive programming on the server side, where calls to each location can be made in parallel after all locations are retrieved. This can clearly improve the synchronous flow shown earlier. This is done in Listing 11, which shows the definition of a reactive version of the forecast service.

    @Path("/reactiveForecast") public class ForecastReactiveResource ( @Uri("location") private WebTarget locationTarget; @Uri("temperature/(city)") private WebTarget temperatureTarget; @GET @Produces(MediaType.APPLICATION_JSON) public void getLocationsWithTemperature (@Suspended final AsyncResponse async) ( long startTime = System.currentTimeMillis(); // Create a stage to retrieve locations CompletionStage > locationCS = locationTarget.request() .rx() .get(new GenericType >() ()); // By creating a separate stage in the locations stage // described above, collect a list of predictions // as in one big CompletionStage final CompletionStage > forecastCS = locationCS.thenCompose(locations -> ( // Create a stage to receive forecasts // as a CompletionStage List > forecastList = // Stream locations and process each // of them separately locations.stream().map(location -> ( // Create a stage to obtain // temperature values ​​for only one city // by its name final CompletionStage tempCS = temperatureTarget .resolveTemplate("city", location.getName()) .request() .rx() .get(Temperature.class); // Then create a CompletableFuture that // contains a forecast instance // with location and temperature value return CompletableFuture.completedFuture(new Forecast(location)) .thenCombine(tempCS, Forecast::setTemperature); )).collect(Collectors.toList()); // Return the final CompletableFuture instance, // where all presented completable future objects // are completed return CompletableFuture.allOf(forecastList.toArray(new CompletableFuture)) .thenApply(v -> forecastList.stream() .map(CompletionStage::toCompletableFuture) .map(CompletableFuture::join) .collect(Collectors.toList())); )); // Create a ServiceResponse instance that // contains the full list of predictions // along with processing times. 
// Create its future and combine it with // forecastCS to get forecasts // and insert it into the service response CompletableFuture.completedFuture(new ServiceResponse()) .thenCombine(forecastCS, ServiceResponse::forecasts) .whenCompleteAsync((response, throwable) - > ( response.setProcessingTime(System.currentTimeMillis() - startTime); async.resume(response); )); ) )
    The reactive implementation may seem complicated at first glance, but after a closer look you will notice that it is quite simple. In the ForecastReactiveResource implementation, I first create a client call to the location services using the JAX-RS Reactive Client API. As I mentioned above, this is an add-on for Java EE 8 and it helps you create a reactive call simply by using the rx() method.

    Next I create a new stage on top of locationCS to collect the list of forecasts. They are stored as a list inside one large completion stage called forecastCS. Ultimately, I build the service response from forecastCS alone.

    Now, let’s collect forecasts in the form of a list of completion stages defined in the forecastList variable. To create a completion stage for each forecast, I pass in the location data and then create a tempCS variable, again using the JAX-RS Reactive Client API, which calls the temperature service with the city name. Here I use the resolveTemplate() method to build the client and this allows me to pass the city name to the builder as a parameter.

    As the final streaming step, I make a call to CompletableFuture.completedFuture() , passing the new Forecast instance as a parameter. I combine this future with a tempCS stage so that I have a temperature value for the iterated locations.

    The CompletableFuture.allOf() method in Listing 11 converts the list of completion stages into forecastCS: it returns one big completable future that completes once all of the provided completable futures have completed.

    The service response is an instance of the ServiceResponse class, so I create a completed future and then combine the forecastCS completion stage with a list of forecasts and calculate the service response time.
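    The same fan-out/fan-in shape can be reproduced without JAX-RS, using only java.util.concurrent. The fetchTemperature method below is a made-up stand-in for the remote temperature call (it derives a fake temperature from the city name), so only the composition pattern mirrors Listing 11.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AllOfDemo {
    // Hypothetical stand-in for the remote temperature service call
    static CompletableFuture<Integer> fetchTemperature(String city) {
        return CompletableFuture.supplyAsync(() -> city.length() + 30);
    }

    public static void main(String[] args) {
        List<String> cities = Arrays.asList("London", "Istanbul", "Prague");

        // Fan out: one future per city, all running concurrently
        List<CompletableFuture<Integer>> futures = cities.stream()
                .map(AllOfDemo::fetchTemperature)
                .collect(Collectors.toList());

        // Fan in: allOf completes when every future has completed, so the
        // join() calls inside thenApply no longer block at that point
        List<Integer> temps =
                CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                        .thenApply(v -> futures.stream()
                                .map(CompletableFuture::join)
                                .collect(Collectors.toList()))
                        .join();

        System.out.println(temps); // [36, 38, 36]
    }
}
```

    Because the per-city calls run concurrently, total latency approaches the latency of a single call, which is exactly the 1,533 ms to 515 ms improvement observed in the article's measurements.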

    Of course, reactive programming here only makes the server side execute asynchronously; the client is still blocked until the server sends the response back. To overcome this, Server-Sent Events (SSE) can be used to deliver partial responses as soon as they become available, so that the temperature value for each location is sent to the client one by one. The output of ForecastReactiveResource will look like Listing 12. As the output shows, the processing time is 515 ms, close to the ideal execution time of retrieving the temperature for a single location.

    ( "forecasts": [ ( "location": ( "name": "London"), "temperature": ( "scale": "Celsius", "temperature": 49 ) ), ( "location": ( "name ": "Istanbul" ), "temperature": ( "scale": "Celsius", "temperature": 32 ) ), ( "location": ( "name": "Prague" ), "temperature": ( "scale ": "Celsius", "temperature": 45 ) ) ], "processingTime": 515 )
    Conclusion

    In the examples in this article, I first showed a synchronous way to get forecasts using location and temperature services. Then, I moved to a reactive approach to allow asynchronous processing to happen between service calls. When you use the JAX-RS Reactive Client API in Java EE 8 along with the CompletionStage and CompletableFuture classes available in Java 8, the power of asynchronous processing is unleashed through reactive programming.

    Reactive programming is more than turning a synchronous model into an asynchronous one; it also makes it easier to work with concepts such as nested stages. The more you use it, the easier it becomes to manage complex scenarios in parallel programming.

    Thank you for your attention. As always, we welcome your comments and questions.


    Today there are a number of methodologies for programming complex real-time systems. One of them is called FRP (functional reactive programming). It adopts the Observer design pattern on the one hand and, as you might guess, the principles of functional programming on the other. In this article we will look at functional reactive programming through its implementation in the Sodium library for Haskell.

    Theoretical part

    Traditionally, programs are divided into three classes:

    • Batch
    • Interactive
    • Reactive

    The differences between these three classes of programs lie in the type of interaction they have with the outside world.

    The execution of batch programs is in no way synchronized with the outside world. They can be launched at some arbitrary point in time and completed when the original data is processed and the result is obtained. Unlike batch programs, interactive and reactive programs exchange data with the outside world continuously during their work from the moment they start until they stop.

    A reactive program, unlike an interactive one, is tightly synchronized with its environment: it must respond to events in the outside world with an acceptable delay, at the rate at which those events occur. An interactive program, by contrast, is allowed to make the environment wait. For example, a text editor is interactive: the user who starts a spell-check of the text must wait for it to complete in order to continue editing. An autopilot, however, is reactive: when an obstacle appears it must correct the course immediately to perform an avoidance maneuver, because the real world cannot be paused the way a text editor's user can be.

    These two classes of programs appeared somewhat later, when computers began to be used to control machinery and user interfaces began to develop. Since then many implementation methods have come and gone, each designed to correct the shortcomings of its predecessors. At first these were simple event-driven programs: the system defined a certain set of events to react to and created handlers for those events. Handlers, in turn, could themselves generate events that went out into the world. This model was simple and solved a certain range of simple interactive problems.

    But over time, interactive and reactive programs became more complex, and event-driven programming became hell. A need arose for more advanced tools for building event-driven systems. This methodology had two main flaws:

    • Implicit state
    • Non-determinism

    To fix this, the Observer pattern was first introduced, transforming events into time-varying values. The set of these values represented the explicit state the program was in. Event processing was thus replaced by data processing, familiar from batch programs. The developer could change observed values and subscribe handlers to changes of those values. It became possible to implement dependent values that were recomputed by a given algorithm whenever the values they depended on changed.

    Although this approach was a step forward, something was still missing. An event does not always imply a change of value. A clock-tick event, for example, implies that a time counter increments by a second, but the event of an alarm going off at a certain time every day implies no value at all. Of course, you can associate a value with this event, but it would be an artificial device. For instance, we could introduce the value: alarm_time == remainder(current_time, 24*60*60). But this is not quite what we are interested in, since this variable is tied to the tick, and the value actually changes twice. To determine that the alarm has gone off, the subscriber must detect that the value has become true, not merely observe the value itself. The value stays true for exactly one second, and if we change the tick period from a second to, say, 100 milliseconds, then the value will be true not for a second but for those 100 milliseconds.

    The functional reactive programming methodology emerged as the functional programmers' answer to the Observer pattern. FRP rethought the existing approaches: events did not go away, but observed values also appeared and were named characteristics (Behaviors). In this methodology an event is a discretely occurring value, and a characteristic is a continuously existing value. The two are related: characteristics can generate events, and events can serve as sources for characteristics.

    The problem of state indeterminacy is much more complex. It stems from the fact that event-driven systems are de facto asynchronous. This entails the emergence of intermediate states of the system, which may be unacceptable for some reason. To solve this problem, so-called synchronous programming appeared.

    Practical part

    The Sodium library appeared as a project implementing FRP with a common interface across different programming languages. It contains all the elements of the methodology: the primitives (Event, Behavior) and patterns for their use.

    Primitives and interaction with the outside world

    The two main primitives we have to work with are:

    • Event a — an event carrying values of type a
    • Behavior a — a characteristic (a changing value) of type a

    We can create new events and characteristics using the functions newEvent and newBehavior:

    newEvent    :: Reactive (Event a, a -> Reactive ())
    newBehavior :: a -> Reactive (Behavior a, a -> Reactive ())

    As we can see, both functions can only be called in the Reactive monad. Each returns the primitive itself together with a function that must be called to fire the event or change the value. The characteristic-creating function takes the initial value as its first argument.

    To connect the real world to a reactive program there is the function sync, and to connect the program back to the outside world there is the function listen:

    sync   :: Reactive a -> IO a
    listen :: Event a -> (a -> IO ()) -> Reactive (IO ())

    The first function, as its name suggests, executes reactive code synchronously: it lets you get into the Reactive context from the IO context. The second is used to attach, from the Reactive context, event handlers that execute in the IO context. listen returns an unlisten action, which must be called to detach the handler.

    This implements a distinctive transaction mechanism: everything we do inside the Reactive monad is executed within a single transaction at the moment sync is called. State is deterministic only outside the context of a transaction.

    This is the foundation of functional reactive programming, and it is sufficient for writing programs. It may be a little confusing that you can only listen to events; this is exactly how it should be, since, as we will see later, events and characteristics are closely related.

    Operations on basic primitives

    For convenience, additional functions have been added to the methodology that transform events and characteristics. Let's look at some of them:

    -- An event that will never happen (can be used as a stub)
    never :: Event a
    -- Merge two events of the same type into one
    -- (useful for defining one handler for a class of events)
    merge :: Event a -> Event a -> Event a
    -- Extract values from Maybe events (separates the wheat from the chaff)
    filterJust :: Event (Maybe a) -> Event a
    -- Turn an event into a characteristic with an initial value
    -- (the value changes when events occur)
    hold :: a -> Event a -> Reactive (Behavior a)
    -- Turn a characteristic into an event (fires when the value changes)
    updates :: Behavior a -> Event a
    -- Turn a characteristic into an event
    -- (also fires an event for the initial value)
    value :: Behavior a -> Event a
    -- On each event, take the characteristic's value,
    -- apply the function, and fire a new event
    snapshot :: (a -> b -> c) -> Event a -> Behavior b -> Event c
    -- Get the current value of a characteristic
    sample :: Behavior a -> Reactive a
    -- Combine repeated events into one
    coalesce :: (a -> a -> a) -> Event a -> Event a
    -- Suppress all events except the first
    once :: Event a -> Event a
    -- Split an event carrying a list into several events
    split :: Event [a] -> Event a

    Spherical examples in a vacuum

    Let's try to write something:

    import FRP.Sodium

    main = do
      sync $ do
        -- create an event
        (e1, triggerE1) <- newEvent
        -- create a characteristic with initial value 0
        (v1, changeV1) <- newBehavior 0
        -- attach a handler to the event
        listen e1 $ \_ -> putStrLn "e1 triggered"
        -- attach a handler to changes of the characteristic's value
        listen (value v1) $ \v -> putStrLn $ "v1 value: " ++ show v
        -- fire the event (it carries no value)
        triggerE1 ()
        -- change the characteristic's value
        changeV1 13

    Let's install the sodium package using Cabal and run the example in the interpreter:

    # if we want to work in a separate sandbox, create it
    > cabal sandbox init
    # install
    > cabal install sodium
    > cabal repl
    GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer-gmp ... linking ... done.
    Loading package base ... linking ... done.
    # load the example
    Prelude> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    # run the example
    *Main>

    Now let's experiment. Comment out the line where we change the value (changeV1 13) and reload the example:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    e1 triggered
    v1 value: 0

    As we can see, the initial value is now printed. This happens because the value function generates a first event carrying the characteristic's initial value. Let's replace value with updates and see what happens:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    e1 triggered

    Now the initial value is not printed, but if we uncomment the line that changes the value, the changed value will still be printed. Let's put everything back as it was and fire the event e1 twice:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    e1 triggered
    e1 triggered
    v1 value: 13

    As you can see, the handler also fired twice. To avoid this, replace the e1 argument of listen with (once e1), thereby creating a new event that fires only once:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    e1 triggered
    v1 value: 13

    When an event carries no argument, all we care about is whether or not it occurred, so using once to collapse events is the right choice. When the argument matters, however, this is not always appropriate. Let's rewrite the example as follows:

    (e1, triggerE1) <- newEvent
    (v1, changeV1)  <- newBehavior 0
    listen e1 $ \v -> putStrLn $ "e1 triggered with: " ++ show v
    listen (value v1) $ \v -> putStrLn $ "v1 value: " ++ show v
    triggerE1 "a"
    triggerE1 "b"
    triggerE1 "c"
    changeV1 13

    As expected, we receive all events with their values, in the order in which they were generated:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    e1 triggered with: "a"
    e1 triggered with: "b"
    e1 triggered with: "c"
    v1 value: 13

    If we used once with e1, we would get only the first event, so let's try coalesce instead, replacing the e1 argument of listen with (coalesce (\_ a -> a) e1):

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    e1 triggered with: "c"
    v1 value: 13

    And indeed, we only received the last event.

    More examples

    Let's look at more complex examples:

    import FRP.Sodium

    main = do
      sync $ do
        (e1, triggerE1) <- newEvent
        -- create a characteristic updated by the event e1
        v1 <- hold 0 e1
        listen e1 $ \v -> putStrLn $ "e1 triggered with: " ++ show v
        listen (value v1) $ \v -> putStrLn $ "v1 value is: " ++ show v
        -- fire the events
        triggerE1 1
        triggerE1 2
        triggerE1 3

    Here's the output:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    e1 triggered with: 1
    e1 triggered with: 2
    e1 triggered with: 3
    v1 value is: 3

    The characteristic's value is printed only once, although several events were generated. This is the peculiarity of synchronous programming: characteristics are synchronized with the sync call. To demonstrate this, let's modify our example slightly:

    triggerE1 <- sync $ do
      (e1, triggerE1) <- newEvent
      v1 <- hold 0 e1
      listen e1 $ \v -> putStrLn $ "e1 triggered with: " ++ show v
      listen (value v1) $ \v -> putStrLn $ "v1 value is: " ++ show v
      return triggerE1
    sync $ triggerE1 1
    sync $ triggerE1 2
    sync $ triggerE1 3

    We simply exposed the event trigger to the outside world and now call it in separate synchronization phases:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    v1 value is: 0
    e1 triggered with: 1
    v1 value is: 1
    e1 triggered with: 2
    v1 value is: 2
    e1 triggered with: 3
    v1 value is: 3

    Now each event displays a new value.

    Other operations on primitives

    Consider the following group of useful functions:

    -- Merges events using the given function
    mergeWith :: (a -> a -> a) -> Event a -> Event a -> Event a
    -- Filters events, keeping only those for which the function returns True
    filterE :: (a -> Bool) -> Event a -> Event a
    -- Lets you "turn off" events while the characteristic is False
    gate :: Event a -> Behavior Bool -> Event a
    -- An event transformer with internal state
    collectE :: (a -> s -> (b, s)) -> s -> Event a -> Reactive (Event b)
    -- A characteristic transformer with internal state
    collect :: (a -> s -> (b, s)) -> s -> Behavior a -> Reactive (Behavior b)
    -- Creates a characteristic by accumulating events
    accum :: a -> Event (a -> a) -> Reactive (Behavior a)

    Of course, these are not all the functions provided by the library. There are much more exotic things that are beyond the scope of this article.

    Examples

    Let's try these functions in action, starting with the last one, by organizing something like a calculator: we keep a value to which we can apply arithmetic functions and read off the result:

    import FRP.Sodium

    main = do
      triggerE1 <- sync $ do
        (e1, triggerE1) <- newEvent
        -- let the initial value be 1
        v1 <- accum (1 :: Int) e1
        listen (value v1) $ \v -> putStrLn $ "v1 value is: " ++ show v
        return triggerE1
      -- add 1
      sync $ triggerE1 (+ 1)
      -- multiply by 2
      sync $ triggerE1 (* 2)
      -- subtract 3
      sync $ triggerE1 (+ (-3))
      -- add 5
      sync $ triggerE1 (+ 5)
      -- raise to the power of 3
      sync $ triggerE1 (^ 3)

    Let's run:

    *Main> :l Example.hs
    Compiling Main (Example.hs, interpreted)
    Ok, modules loaded: Main.
    *Main> main
    v1 value is: 1
    v1 value is: 2
    v1 value is: 4
    v1 value is: 1
    v1 value is: 6
    v1 value is: 216

    It may seem that this set of functions is rather meager, but in fact it is not. We are dealing with Haskell, after all; applicative functors and monads are here to stay. We can perform the same operations on characteristics and events that we are used to performing on pure values, obtaining new characteristics and events as a result. Characteristics have Functor and Applicative instances, while events, for obvious reasons, have only a Functor instance.

    For example:

    import Control.Applicative ((<$>), (<*>))
    import FRP.Sodium

    main = do
      (setA, setB) <- sync $ do
        (a, setA) <- newBehavior 0
        (b, setB) <- newBehavior 0
        -- a new characteristic: a + b
        let a_add_b = (+) <$> a <*> b
        -- a new characteristic: a * b
        let a_mul_b = (*) <$> a <*> b
        listen (value a) $ \v -> putStrLn $ "a = " ++ show v
        listen (value b) $ \v -> putStrLn $ "b = " ++ show v
        listen (value a_add_b) $ \v -> putStrLn $ "a + b = " ++ show v
        listen (value a_mul_b) $ \v -> putStrLn $ "a * b = " ++ show v
        return (setA, setB)
      sync $ do
        setA 2
        setB 3
      sync $ setA 3
      sync $ setB 7

    This is what will be output in the interpreter:

    λ> main
    a = 0
    b = 0
    a + b = 0
    a * b = 0
    a = 2
    b = 3
    a + b = 5
    a * b = 6
    a = 3
    a + b = 6
    a * b = 9
    b = 7
    a + b = 10
    a * b = 21

    Now let's see how something like this works with events:

    import Control.Applicative ((<$>))
    import FRP.Sodium

    main = do
      sigA <- sync $ do
        (a, sigA) <- newEvent
        let a_mul_2 = (* 2) <$> a
        let a_pow_2 = (^ 2) <$> a
        listen a $ \v -> putStrLn $ "a = " ++ show v
        listen a_mul_2 $ \v -> putStrLn $ "a * 2 = " ++ show v
        listen a_pow_2 $ \v -> putStrLn $ "a ^ 2 = " ++ show v
        return sigA
      sync $ sigA 2
      sync $ sigA 3
      sync $ sigA 7

    This is what will be output:

    λ> main
    a = 2
    a * 2 = 4
    a ^ 2 = 4
    a = 3
    a * 2 = 6
    a ^ 2 = 9
    a = 7
    a * 2 = 14
    a ^ 2 = 49

    The documentation lists the class instances implemented for Behavior and Event, but nothing prevents you from implementing instances of the missing classes.

    The Downside of Reactivity

    Functional reactive programming undoubtedly simplifies the development of complex real-time systems, but there are many aspects to consider when using this approach. Here we examine the problems that occur most often.

    Non-simultaneity

    Synchronous programming implies a transaction mechanism that ensures the consistency of successively changing system states and, therefore, the absence of unexpected intermediate states. In Sodium, transactions are delimited by calls to sync. Although the state inside a transaction is not observable from outside, it cannot be assumed that everything inside it happens simultaneously. Values change in a certain order, and that order affects the result. In particular, combining events and characteristics can cause unexpected effects. Let's look at an example:

    import Control.Applicative ((<$>))
    import FRP.Sodium

    main = do
        setVal <- sync $ do
            (val, setVal) <- newBehavior 0
            -- Create a Boolean characteristic val > 2
            let gt2 = (> 2) <$> val
            -- Create an event carrying the values that are > 2
            let evt = gate (value val) gt2
            listen (value val) $ \v -> putStrLn $ "val = " ++ show v
            listen (value gt2) $ \v -> putStrLn $ "val > 2 ? " ++ show v
            listen evt $ \v -> putStrLn $ "val > 2: " ++ show v
            return setVal
        sync $ setVal 1
        sync $ setVal 2
        sync $ setVal 3
        sync $ setVal 4
        sync $ setVal 0

    You can expect output like this:

    val = 0
    val > 2 ? False
    val = 1
    val > 2 ? False
    val = 2
    val > 2 ? False
    val = 3
    val > 2 ? True
    val > 2: 3
    val = 4
    val > 2 ? True
    val > 2: 4
    val = 0
    val > 2 ? False

    However, in fact the line val > 2: 3 will be missing, and at the end there will be an extra line val > 2: 0. This happens because the value-change event (value val) is generated before the dependent characteristic gt2 is recomputed, so the evt event does not fire for the newly set value 3. And at the end, when we set the value back to 0, the recomputation of gt2 arrives late: gt2 still holds True, so evt fires with 0.

    In general, the effects are the same as in analog and digital electronics: signal races, which are fought with various techniques, in particular synchronization. That is what we will do to make this code work properly:

    import Control.Applicative ((<$>))
    import FRP.Sodium

    main = do
        (sigClk, setVal) <- sync $ do
            -- We introduce a new event clk, a synchronization signal,
            -- just like in digital electronics
            (clk, sigClk) <- newEvent
            (val, setVal) <- newBehavior 0
            -- We also create an alternative function that samples
            -- values on the synchronization signal, and replace
            -- all calls to value with value'
            let value' = snapshot (\_ v -> v) clk
            let gt2 = (> 2) <$> val
            let evt = gate (value' val) gt2
            listen (value' val) $ \v -> putStrLn $ "val = " ++ show v
            listen (value' gt2) $ \v -> putStrLn $ "val > 2 ? " ++ show v
            listen evt $ \v -> putStrLn $ "val > 2: " ++ show v
            return (sigClk, setVal)
        -- We introduce a new function sync', which raises the
        -- synchronization signal at the end of each transaction,
        -- and replace all calls to sync with it
        let sync' a = sync $ a >> sigClk ()
        sync' $ setVal 1
        sync' $ setVal 2
        sync' $ setVal 3
        sync' $ setVal 4
        sync' $ setVal 0

    Now our output is as expected:

    λ> main
    val = 0
    val > 2 ? False
    val = 1
    val > 2 ? False
    val = 2
    val > 2 ? False
    val = 3
    val > 2 ? True
    val > 2: 3
    val = 4
    val > 2 ? True
    val > 2: 4
    val = 0
    val > 2 ? False

    Laziness

    Another kind of problem relates to the lazy nature of computations in Haskell: when testing code in the interpreter, some of the output at the end may simply be missing. What can be suggested in this case is to perform a useless synchronization step at the end, e.g. sync $ return ().

    Conclusion

    That's enough for now, I think. At the moment, one of the authors of the Sodium library is writing a book about FRP. Let's hope it will fill some of the gaps in this area of programming and help popularize progressive approaches in our ossified minds.