

Flow Adversarial Networks: Flowrate Prediction for Gas–Liquid Multiphase Flows Across Different Domains

Delin Hu, Jinku Li, Yinyan Liu, and Yi Li

Abstract— Accurate and timely prediction of the flowrate of gas–liquid mixtures is key to helping petroleum and other related industries reduce costs, improve efficiency, and optimize management. Although numerous studies have been carried out over the past decades, the problem remains significantly challenging due to the complexity of multiphase flows. This paper seeks new possibilities for multiphase flow measurement and novel application scenarios for state-of-the-art machine learning (ML) techniques. Convolutional neural networks (CNNs) are applied to predict the flowrate of multiphase flows for the first time and achieve promising performance. In addition, considering the difference between the data distributions of training and testing samples and its negative impact on the prediction accuracy of CNN models on testing samples, we propose flow adversarial networks (FANs), which distill both domain-invariant and flowrate-discriminative features from the raw input. The method is evaluated on dynamic experimental data of different multiphase flows under different flow conditions and operating environments. The experimental results demonstrate that FANs can effectively prevent the accuracy degradation caused by the gap between training and testing samples and outperform state-of-the-art approaches in the flowrate prediction field.

Index Terms— Convolutional neural network (CNN), domain adaptation, domain adversarial network, flowrate prediction, multiphase flow, regression, transfer learning, venturi tube.

NOMENCLATURE

E(·; θ_e)      feature extractor with parameters θ_e
P(·; θ_p)      label predictor with parameters θ_p
D(·; θ_d)      domain discriminator with parameters θ_d
p              time series of multi-channel pressure signals
p_s            time series of multi-channel pressure signals in source domain
p_t            time series of multi-channel pressure signals in target domain

Manuscript received July 6, 2018; revised October 31, 2018 and February 28, 2019; accepted March 11, 2019. This work was supported by the National Natural Science Foundation of China under Grant 61571252. (Corresponding author: Yi Li.)

D. Hu, Y. Liu, and Y. Li are with the Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China (e-mail: [email protected]; [email protected]).

J. Li is with the Department of Automation, Tsinghua University, Beijing 100084, China.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNNLS.2019.2905082

ṁ_l            reference of average liquid mass flowrate
ṁ_{l_s}        reference of average liquid mass flowrate in source domain
ṁ_{l_t}        reference of average liquid mass flowrate in target domain
ṁ̂_l            prediction of average liquid mass flowrate
ṁ^m_{l_s}      average liquid mass flowrate of multiphase flow in source domain
ṁ^s_{l_s}      average liquid mass flowrate of single-phase flow in source domain
S(p, ṁ_l)      joint distribution of p and ṁ_l in source domain
T(p, ṁ_l)      joint distribution of p and ṁ_l in target domain
S(p)           marginal distribution of p in source domain
T(p)           marginal distribution of p in target domain
f              feature vector
f_s            feature vector in source domain
f_t            feature vector in target domain
F_s(f)         marginal distribution of f in source domain
F_t(f)         marginal distribution of f in target domain
d              reference of domain label
d̂              prediction of domain label

I. INTRODUCTION

GAS–LIQUID multiphase flows are commonly observed in various chemical reactors and in related equipment of oil and gas fields. Accurately metering the parameters of gas–liquid mixtures is quite important for these industries in terms of efficiency improvement, cost reduction, and management optimization. The traditional method first separates the components of the mixture using separators, and then the flowrate of each phase is measured by conventional single-phase flowmeters. However, this approach has numerous shortcomings and restrictions in practice. For instance, it cannot provide predictions in time, because the process of multiphase flow separation usually



requires several hours or even longer. Moreover, the separators are expensive and occupy a lot of space, which further leads to a significant increase in costs, especially for offshore wells. Petroleum companies are therefore motivated to invest heavily in developing more advanced techniques for multiphase flow measurement, i.e., multiphase flow meters (MFMs), which can estimate the parameters of multiphase flows directly without separators [1]–[3].

Differential pressure devices, including venturi tube and orifice plate flowmeters, are widely used in single-phase flow measurement, and many studies on applying these devices to predict the flowrate of multiphase flows have been carried out during the past decades. One typical method for measuring the flowrate of the gas phase in gas–liquid mixtures is to first compute the flowrate on the basis of differential pressure measurements and single-phase flowmeter principles, and then find a correlation model to correct the error induced by the presence of the liquid phase. The same procedure can also be applied to meter the flowrate of the liquid phase. However, the establishment of correlation models normally requires simplifications of the dynamic states of multiphase flows due to their complexity and the lack of relevant fluid knowledge. Therefore, many correlation models are only applicable under the specific flow conditions and operating environments that are consistent with these simplifications. For example, Lin [4], Xu et al. [5], Yuan et al. [6], and Xu et al. [7] proposed different semiempirical measurement models for wet gas, which is defined as a gas–liquid mixture whose gas volume fraction (GVF) is 95% or higher [8]. These models are invalid or cause large errors when metering gas–liquid multiphase flows under other flow conditions. Broadening the application range and enhancing the measurement accuracy of this type of flowmeter remains challenging and attracts the attention of numerous researchers all over the world.

Recently, with the development of machine learning (ML) and its successful applications in many other fields, more and more studies aiming to combine ML and multiphase flow measurement have been conducted. Timung and Mandal [9] employed probabilistic neural networks (NNs) to identify the flow pattern of gas–liquid mixtures through a circular microchannel. Shaban and Tavoularis [10] applied elastic maps to analyze the probability density function (PDF) of differential pressure signals produced by vertical upward water–air pipe flow for flow regime recognition. Wu et al. [11] designed an intelligent system based on backpropagation NNs (BPNNs), which can predict the flow pattern of oil–water–gas multiphase flow according to characteristic vectors extracted from denoised differential pressure signals with fractal theory. Sun and Zhang [12] employed BPNNs to discern flow regimes through vortex flowmeter signals. In addition to flow pattern identification, ML techniques have also been applied to measure many other parameters of multiphase flows. For instance, Shaban and Tavoularis [13] developed a new algorithm for flowrate prediction. In their method, independent component analysis and principal component analysis were first used to reduce the dimensionality of feature vectors, and then those vectors were used as the input of

different BPNNs to predict flowrates in different flow patterns. Xu et al. [14] established the relation between signals from a throat-extended venturi tube and the flowrates of gas–liquid mixtures via BPNNs and support vector machines (SVMs). Fan and Yan [15] extracted five characteristic parameters from conductance signals as the input of a BPNN model to estimate the flowrate of water–air slug flow. Azizi et al. employed BPNNs to determine void fractions in [16] and water-in-liquid ratios in [17].

Although these methods perform well on sensor data from dynamic multiphase flow experiments, it is still difficult to apply them successfully to actual industrial processes. The main reason is the difference between the data distributions of samples obtained on experimental platforms and samples from real industrial production. Reducing or eliminating the negative impact caused by this difference is the key to generalizing prediction models fitted on experimental data to real industrial applications.

On the one hand, the difference is partially caused by the different fluid mediums used in experiments and in real production. The medium of the gas phase in most dynamic multiphase flow experiments is air. However, the gas phases of the outputs from different oil or gas wells are not necessarily air; they could be natural gas, methane, carbon dioxide, and so on. The same problem exists for the liquid phase. In terms of safety and economy, it is impossible for the components of multiphase flows in the laboratory to be exactly the same as the outputs of oil or gas wells. In this paper, two strategies are presented to decrease the negative effect of this discrepancy, i.e., selecting an appropriate output objective for flowrate prediction models and flow adversarial networks (FANs).

On the other hand, improper experimental operations also contribute to the data difference. In the related work mentioned before, NNs were trained only with samples generated by stable single-phase flows, i.e., the openings of the control valves of the single-phase flows were kept constant during data collection. However, there is no guarantee that multiphase flows in real industrial processes resemble the multiphase flows mixed from stable single-phase flows in experiments. Aiming at this problem, we design a more reasonable experiment scheme and data acquisition method so as to obtain training samples closer to practical applications.

This paper is highly motivated by the strong industrial demand of petroleum and other related companies. We expect to employ advanced ML techniques to help these companies improve efficiency and reduce costs. Our contributions are summarized as follows.

1) We describe a novel application scenario (i.e., flowrate prediction for multiphase flows) for different ML techniques, including CNNs and domain adaptation.

2) We propose a domain adaptation method, FANs, to achieve flowrate prediction for multiphase flows across different domains. This can effectively mitigate the negative impact on prediction accuracy caused by the gap between training and testing samples.

3) We conduct dynamic multiphase flow experiments on a semi-industrial experimental platform to collect data


of different multiphase flows under a large variety of flow conditions and operating environments.

4) We evaluate the effectiveness of CNNs, domain adversarial networks (DANs), and our method FANs using the multiphase flow experimental data set and observe that FANs outperform the other two methods.

In the rest of this paper, the concepts of the ML techniques we employ are introduced in Section II, including convolutional NNs (CNNs), generative adversarial networks (GANs), and DANs. The details of the dynamic multiphase flow experiments and the data acquisition method are discussed in Section III. FANs are proposed in Section IV. Section V presents the experimental results of different networks on different experimental data sets. The summary of this paper and future directions are presented in Section VI.

II. BACKGROUNDS OF RELATED MACHINE LEARNING TECHNIQUES

A. Convolutional Neural Networks

CNNs are a powerful tool for processing data that come in the form of multiple arrays [18]. The general structure of a CNN mainly includes two multilayer networks acting in different roles, namely, a feature extractor and a label predictor. The input of a CNN first passes through the feature extractor, which produces feature maps representing the input. The feature extractor usually consists of multiple convolutional layers and pooling layers. A convolutional layer is a set of filters, which are matrices or tensors whose elements are weights. Each result of the discrete convolution between a filter and each patch (local receptive field [19]) of the layer's input is passed through a nonlinear activation function, and all the results are organized as one feature map. This means that all units in one feature map share the same weights. Repeating the operation with different filters generates different feature maps, and the combination of all the feature maps is the output of the convolutional layer. Fig. 1(a) shows a simple example of the convolutional operation.

The role of pooling layers, which usually follow convolutional layers, is to merge several neighboring features into one, typically by taking their max or average [20]. This operation reduces the dimension of the feature representations and the sensitivity to small shifts and distortions [18]. Fig. 1(b) shows a simple example of the maxpooling operation. Multiple convolutional layers and pooling layers constitute the feature extractor, which can capture salient descriptions of the raw input.

The structure of the label predictor is much simpler. It usually contains several fully connected layers and completes classification or regression tasks according to the feature maps produced by the feature extractor.
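To make this two-part structure concrete, the sketch below shows a minimal PyTorch feature extractor and label predictor for a multi-channel time series; the layer widths, kernel sizes, and the three-channel, 1800-step input shape are illustrative assumptions, not the exact architecture used in this paper.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Stacked 1-D convolution + pooling layers that map a raw
    multi-channel time series to a flat feature vector."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            nn.Conv1d(16, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        return self.layers(x).flatten(start_dim=1)

class LabelPredictor(nn.Module):
    """A few fully connected layers mapping the features to a scalar flowrate."""
    def __init__(self, in_features):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_features, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, f):
        return self.layers(f)

# Example: a batch of 8 samples, 3 pressure channels, 1800 time steps.
x = torch.randn(8, 3, 1800)
extractor = FeatureExtractor()
features = extractor(x)                        # shape: (8, 32 * 450)
predictor = LabelPredictor(features.shape[1])
flowrate = predictor(features)                 # shape: (8, 1)
```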

Over the past decades, CNNs have achieved great success in many domains, including computer vision, natural language processing, and so forth. Some excellent CNN architectures can be found in [21]–[24].

Fig. 1. Examples of convolutional and maxpooling operations. (a) Convolutional operation. (b) Maxpooling operation.

Fig. 2. General framework of GANs.

B. Generative Adversarial Networks

GANs, proposed by Goodfellow et al. [25], are able to generate high-quality and realistic images. The framework of GANs mainly contains two different parts (see Fig. 2), i.e., a generator G and a critic C. G draws pictures according to input vectors sampled from a random distribution, z ∼ p_z(z). C is employed to estimate the distance between the data distributions of pictures generated by G and real ones. The approach to training C is actually the same as training a binary classifier: the class labels of real images are defined as 1, and the labels of synthetic images are defined as 0. The parameters of C are then iteratively updated by backpropagation to reduce the loss (cross entropy). When training G, the parameters of C are fixed, and the parameters of G are adjusted to maximize the probability that C misclassifies the synthetic images as real ones; this step can also be carried out by backpropagation. As the two training processes are repeated, the pictures drawn by G become more and more realistic. In fact, C and G play a two-player minimax game with value function V(G, C) [25]

$$\min_G \max_C V(G, C) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log C(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - C(G(z)))] \qquad (1)$$

where x is a real image and p_data(x) is the distribution of x.
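For illustration, the following is a minimal PyTorch sketch of the alternating updates implied by (1); the toy generator and critic architectures, the latent dimension, and the learning rates are all placeholder assumptions rather than any particular published implementation.

```python
import torch
import torch.nn as nn

# Toy generator and critic; the architectures are placeholders.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
C = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(C.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch = real_batch.shape[0]
    z = torch.randn(batch, 16)

    # Critic step: real samples labelled 1, synthetic samples labelled 0.
    fake = G(z).detach()
    loss_c = bce(C(real_batch), torch.ones(batch, 1)) + \
             bce(C(fake), torch.zeros(batch, 1))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Generator step: the critic is held fixed; G tries to make
    # C misclassify synthetic samples as real ones.
    loss_g = bce(C(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.randn(32, 2))   # stand-in for a batch of real samples
```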


There is no doubt that GANs have made great contributions to image generation and other related fields. However, there are also some weaknesses in the original framework of GANs, such as instability in training and the generation of numerous repetitive images [26]. To address these problems, many related studies have been carried out and published recently, such as Wasserstein GANs (WGANs) [27], WGANs with gradient penalty (WGANs-GP) [28], deep convolutional GANs (DCGANs) [29], boundary equilibrium GANs (BEGANs) [30], loss-sensitive GANs (LSGANs) [31], and so on.

C. Domain Adversarial Networks

In many ML applications, there is a common issue that the data distributions of training samples (the source domain) and samples encountered in practice (the target domain) are related but not exactly the same. The technique of lessening the negative impact of this problem on the performance of the learning model in the target domain is known as domain adaptation [32]. DANs are one such technique and try to complete the domain adaptation task with the assistance of an adversarial architecture [33].

The general framework of DANs mainly includes three different multilayer networks, i.e., a feature extractor E, a label predictor P, and a domain discriminator D. E extracts features representing the raw input. D determines whether the input comes from the source domain or the target domain according to the features extracted by E. P completes prediction tasks (classification or regression) on the basis of the same features.

Both the labeled samples in the source domain and the unlabeled samples in the target domain are needed during training. D is trained as a binary classifier, and E attempts to extract domain-invariant features by fooling D. The roles of E and D are quite similar to the roles of G and C in GANs. A gradient reverse layer is employed to realize the adversarial training between D and E in DANs. In addition, the combination of E and P is expected to establish the relation between the raw input and the corresponding label. Therefore, the parameters of E and P should also be updated to minimize the difference between the reference and predicted values of the labels [33], [34].
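A gradient reverse layer can be written compactly as a custom autograd function; the sketch below is a generic PyTorch version rather than the DAN authors' code, so the name grad_reverse and the λ handling are illustrative.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda
    in the backward pass, so E is updated against the domain loss."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# Usage inside a DAN forward pass (E and D are assumed modules):
#   f = E(x)
#   domain_logits = D(grad_reverse(f, lamb))
```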

In DANs, E actually attempts to distill domain-invariant features by maximizing the probability that D makes mistakes. However, this does not mean that the distributions of source and target domain features will have high similarity. In our method, E is also expected to capture domain-invariant features for better performance of the prediction model on target domain samples. However, this goal is not achieved by directly maximizing the loss of D. Instead, E is required to generate source and target features that obtain close scores from D; more details are discussed in Section IV.

In addition to DANs, there are also other studies concerning domain adaptation with adversarial architectures. For instance, instead of finding domain-invariant features, adversarial discriminative domain adaptation (ADDA) [35] manages to generate target domain features that are similar to the features generated by source domain samples. Then the source

Fig. 3. Dynamic multiphase flow experimental facility. (a) Schematic. (b) Actual devices.

predictor can be used directly on target domain features to complete classification and regression tasks.

III. EXPERIMENTAL FACILITY AND DATA ACQUISITION

A. Experimental Facility

The dynamic experiments were conducted on a multiphase flow test facility (see Fig. 3). The operating procedure of the facility is as follows.

1) In order to build an experimental environment as close as possible to the industrial environment (sealed pipes under pressure), the release valve should be closed and the air compressor activated to provide a reasonable static pressure for the whole system.

2) The oil and water single-phase flows are drawn from the separator by the oil and water pumps, respectively, and the gas single-phase flow is supplied by the cycling compressor.

3) The parameters of all the single-phase flows are recorded by single-phase meters.

4) Changing the opening of the control valve adjusts the flowrate of the corresponding single-phase flow.


Fig. 4. Structure of the venturi tube.

5) The single-phase flows are mixed to form the multiphase flow, and then the mixture passes through the MFM under test.

6) The multiphase flow eventually enters the separator, where it is separated into single-phase flows and reused.

A venturi tube (see Fig. 4) with four different sensors was mounted at the place of the "MFM" shown in Fig. 3(a) to collect information about the multiphase flow passing through the horizontal pipe. The dynamic pressure P, the differential pressure of the convergent section ΔP1, the differential pressure of the divergent section ΔP2, and the temperature T were measured and recorded. The measurement range of ΔP1 and ΔP2 is from 0 to 62.2 kPa. The device was manufactured in accordance with ISO standards, and more details concerning the structure and parameters are given in [36].

B. Data Acquisition

The standard values of the instantaneous parameters of the multiphase flow are not available during the experiments, because there are no accurate and reliable MFMs on the market. The only available information is the data recorded by the single-phase meters. Within a period S, the total volume (v^s_l) of the liquid single-phase flow passing the single-phase meter is quite close to the total volume (v^m_l) of the liquid phase in the multiphase flow passing the venturi tube. The problem is therefore converted to predicting the average flowrate of the multiphase flow over the period S, rather than the instantaneous flowrate. The upper bound of the absolute value of the absolute error (ε_v) between the average flowrates of the liquid phase and the liquid single-phase flow is

$$|\varepsilon_v| = \left| \frac{v^s_l}{S} - \frac{v^m_l}{S} \right| \le \frac{\pi r^2 l}{S} \qquad (2)$$

where r is the radius of the cross section of the pipe and l is the length of the experimental pipe between the single-phase meter and the venturi tube.
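To give a sense of scale, the short sketch below evaluates the bound πr²l/S for a few averaging periods; the pipe radius r and the length l are assumed placeholder values, not the actual dimensions of the facility.

```python
import math

r = 0.025   # assumed pipe radius in metres (illustrative only)
l = 10.0    # assumed pipe length between meter and venturi in metres (illustrative only)
for S in (60.0, 300.0, 600.0):           # averaging period in seconds
    bound = math.pi * r ** 2 * l / S     # upper bound on |eps_v| in m^3/s
    print(f"S = {S:5.0f} s  ->  |eps_v| <= {bound:.2e} m^3/s")
```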

Note that only in some extreme cases, which are not likely to happen in practice, is |ε_v| equal to πr²l/S. In actual multiphase flow experiments, |ε_v| should be far less than πr²l/S. Since πr²l can be regarded as a constant, the upper bound of |ε_v| decreases as S increases. Therefore, we can use v^s_l to approximate v^m_l when S is large enough. In this paper, S is selected as 5 min.

The dynamic experiments in this paper were divided into three groups according to the medium of the liquid phase. One group is the water–air two-phase flow, and the other two groups are the oil–air two-phase flow and the oil–water–air three-phase flow.

TABLE I

EXPERIMENTAL CONDITIONS

TABLE II

FORMULAS OF DIFFERENT KINDS OF FLOWRATES

The dynamic multiphase flow experiments were conducted on the facility shown in Fig. 3(b). The details concerning the experimental conditions are given in Table I, where γ is the water-in-liquid ratio (γ = v_water / v_liquid).

The flowrate of each single-phase flow was changed every 2 min in the two-phase experiments and every 3 min in the three-phase experiments. Data for the multiphase flows formed by both stable and transient single-phase flows were collected during the experiments. Note that this operation differs from the multiphase flow experiments conducted in previous work. In order to investigate the properties of multiphase flows systematically and comprehensively, such as the mechanisms behind different flow patterns and the interaction between different phases, most previous experiments observed and recorded multiphase flow data only after the flowrates of the single-phase flows were stable before mixing. However, from the perspective of flowrate prediction, stable single-phase flows are not indispensable. The experiment method in this paper can not only obtain training samples closer to real industrial production but also improve the efficiency of data collection.

The time series of multichannel pressure signals (P, ΔP1, and ΔP2), which can characterize the flowrate of the multiphase flow passing through the venturi tube, was selected as the input of our prediction model in this paper. The sampling frequency of the sensors is 6 Hz, and the length of the period S is 300 s. Consequently, each time series is an 1800 × 3 matrix. The average flowrates of the liquid single-phase flows are calculated as reference labels. Table II lists the computational formulas for the different types of flowrates. Fig. 5 shows examples of the time series of multichannel normalized pressure signals (p). All the liquid mass flowrates of the examples shown in Fig. 5 are about 3000 kg/h.


Fig. 5. Examples of the time series of multichannel normalized pressure signals. (a) Three examples generated by the water–air two-phase flow. (b) Three examples from the oil–air two-phase flow. (c) Examples from the oil–water–air three-phase flow. The average mass flowrate of the liquid phase for each example is approximately 3000 kg/h, but the signals look quite different due to differences in the gas-phase flowrates, liquid-phase mediums, flow patterns, and so forth.

The entire experiments took about 100 h.1 Approximately 3500 samples of the water–air two-phase flow, 3500 samples of the oil–air two-phase flow, and 2500 samples of the oil–water–air three-phase flow were collected. Partial overlap between different samples is acceptable. For instance, it is possible that sample A and sample B are generated by the experimental data from the 300th s to the 600th s and the data from the 550th s to the 850th s, respectively. Although these two samples overlap in the period between the 550th s and the 600th s, they can be treated as two different samples.
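The following sketch illustrates how such possibly overlapping 1800 × 3 windows could be sliced from a continuous 6-Hz pressure record; the stride and the synthetic signal are arbitrary choices for demonstration, not the scheme actually used to build the data set.

```python
import numpy as np

def make_windows(pressure, window=1800, stride=300):
    """pressure: array of shape (n_samples, 3) holding P, dP1, dP2 at 6 Hz.
    Returns overlapping windows of shape (num_windows, window, 3)."""
    starts = range(0, pressure.shape[0] - window + 1, stride)
    return np.stack([pressure[s:s + window] for s in starts])

signal = np.random.randn(6 * 3600, 3)    # one hour of synthetic 6 Hz data
windows = make_windows(signal)
print(windows.shape)                      # (67, 1800, 3)
```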

IV. FLOW ADVERSARIAL NETWORKS

A. Assumption

We assume that source domain samples (p_s, ṁ_{l_s}) are generated from a distribution S(p, ṁ_l), and target domain samples (p_t, ṁ_{l_t}) are generated from another distribution T(p, ṁ_l). The two distributions are related but not exactly the same. S(p) and T(p) are the marginal distributions of p_s and p_t.

1Our data are available at https://pan.baidu.com/s/1xBNqaShAB6SUdP-c5vL9JA

Our goal in this paper is to learn a flowrate prediction model that can accurately predict ṁ_{l_t} according to p_t. The prediction model usually includes two different parts, i.e., a feature extractor E(·; θ_e) and a label predictor P(·; θ_p). The input signal p first passes through the feature extractor E(·; θ_e) and is mapped to the feature vector f, i.e., f = E(p; θ_e). P(·; θ_p) then predicts the label (flowrate) ṁ_l according to the feature vector f, i.e., ṁ̂_l = P(f; θ_p). Our objective is to find the optimal parameters θ*_e and θ*_p that minimize the error between the real target label ṁ_{l_t} and the prediction ṁ̂_{l_t}

$$(\theta_e^*, \theta_p^*) = \arg\min_{\theta_e, \theta_p} \mathbb{E}_{(p_t, \dot m_{l_t}) \sim T(p, \dot m_l)} \big[\dot m_{l_t} - P(E(p_t; \theta_e); \theta_p)\big]^2. \qquad (3)$$

In an ideal case, this goal could be achieved by supervised learning using target domain samples (p_t, ṁ_{l_t}). However, in real applications, it is impossible or extremely expensive to collect sufficient reference information to train a deep architecture. In other words, the information available during training consists of labeled source samples (p_s, ṁ_{l_s}), unlabeled target samples p_t, and the domain labels d of the samples. d is a binary variable


which indicates the domain of p: if p is a source sample, i.e., p ∼ S(p), then d = 0; otherwise, p ∼ T(p) and d = 1. θ_e and θ_p need to be determined from this information. This can actually be regarded as a domain adaptation problem: we need to train a prediction model with source samples (p_s, ṁ_{l_s}) and explore how to apply that model to the target domain. In this section, we present FANs to address this issue.

B. Formulation

Source domain samples p_s and target domain samples p_t can be mapped to source domain feature vectors f_s and target domain feature vectors f_t, respectively, through the feature extractor E(p; θ_e). f_s and f_t are subject to the source domain feature distribution F_s and the target domain feature distribution F_t, respectively, i.e., f_s ∼ F_s(f) and f_t ∼ F_t(f). If E(p; θ_e) were learned merely with source domain samples, it would be difficult for E(p; θ_e) to capture correct descriptions of target domain samples due to the difference between the data distributions of the source and target domains. Consequently, there is a gap between F_s(f) and F_t(f). This gap causes a decline in target domain prediction accuracy when the label predictor P(f; θ_p) determines the flowrate according to f. If we can narrow the difference between F_s(f) and F_t(f), the negative impact will definitely be mitigated. Thus, we expect E(p; θ_e) to produce both domain-invariant and flowrate-discriminative features. In other words, F_s(f) and F_t(f) should be as similar as possible, while P(f; θ_p) can still predict the flowrate accurately on the basis of f.

We introduce a domain discriminator D(·; θ_d) to help E(p; θ_e) distill domain-invariant features. The input of D(·; θ_d) is the feature vector f, and the output is the prediction d̂ of the domain label, i.e., d̂ = D(f; θ_d). Similar to GANs, D(f; θ_d) can actually measure the distance between F_s(f) and F_t(f). If the domain label d̂ produced by D(f; θ_d) is the same as the real domain label d, the gap between f_s and f_t is significant enough to be noticed by D(f; θ_d). On the contrary, if D(f; θ_d) cannot classify source domain features f_s and target domain features f_t correctly, the gap is not remarkable for it.

The general framework of FANs includes three parts, i.e., E(p; θ_e), P(f; θ_p), and D(f; θ_d), and each part plays a different role. First of all, D(f; θ_d) is expected to distinguish source and target domain features. We can achieve this goal by updating θ_d to minimize the loss function L′_d, which is defined as

$$\mathcal{L}'_d(p_s, p_t; \theta_e, \theta_d) = \mathbb{E}_{p_s \sim S(p)}\big[D(E(p_s; \theta_e); \theta_d)\big] - \mathbb{E}_{p_t \sim T(p)}\big[D(E(p_t; \theta_e); \theta_d)\big] + \eta\, \mathbb{E}_{\bar f \sim F_{\bar f}}\big[\big(\|\nabla_{\bar f} D(\bar f; \theta_d)\|_2 - 1\big)^2\big] \qquad (4)$$

where the last term is the gradient penalty of WGANs-GP [28] and \bar{f} is sampled uniformly along straight lines between pairs of source and target features (see Algorithm 1).

We could also train D(f; θ_d) with the cross-entropy loss as a binary classifier. However, in that case, D(f; θ_d) would actually measure the Jensen–Shannon divergence between F_s(f) and F_t(f), which would lead to instability and difficulty in training [26]. Instead, we employ the loss function of WGANs [27], [28], which measures the Wasserstein distance between F_s(f) and F_t(f) and can therefore reduce instability in training.
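As an illustration, the loss in (4) could be computed for one batch roughly as follows in PyTorch; E and D are assumed to be existing modules with flat feature outputs, and the default η = 10 is only a common WGAN-GP choice, not a value reported in this paper.

```python
import torch

def discriminator_loss(E, D, p_s, p_t, eta=10.0):
    """Critic loss of Eq. (4): Wasserstein-style score difference between
    source and target features plus a gradient penalty on interpolates."""
    f_s, f_t = E(p_s).detach(), E(p_t).detach()      # only D is updated here
    beta = torch.rand(f_s.shape[0], 1)               # one coefficient per sample
    f_mix = (beta * f_t + (1.0 - beta) * f_s).requires_grad_(True)
    grad = torch.autograd.grad(D(f_mix).sum(), f_mix, create_graph=True)[0]
    penalty = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
    return D(f_s).mean() - D(f_t).mean() + eta * penalty
```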

In addition, we expect P(f; θ_p) to determine the average liquid flowrate according to f. Therefore, its loss function L_p can be defined as

$$\mathcal{L}_p(p_s, \dot m_{l_s}; \theta_e, \theta_p) = \mathbb{E}_{(p_s, \dot m_{l_s}) \sim S(p, \dot m_l)}\Big[\max\big((P(E(p_s; \theta_e); \theta_p) - \dot m_{l_s})^2 - \delta,\ 0\big)\Big] \qquad (5)$$

where δ is a small tolerance below which the squared error is ignored, and L_p is similar to the loss function of support vector regression (SVR) [37]. We choose this form of loss function mainly because ṁ_{l_s} is close to, but not exactly equal to, the real mass flowrate ṁ^m_{l_s} of the liquid phase in the multiphase flow. In fact, ṁ_{l_s} here is the flowrate of the liquid single-phase flow before mixing, i.e., ṁ^s_{l_s}. As discussed in Section III, there is an error between ṁ^m_{l_s} and ṁ^s_{l_s}, but it is less than a very small upper bound. Therefore, we can replace ṁ^m_{l_s} with ṁ^s_{l_s}. However, if the prediction ṁ̂_l were strictly required to be exactly the same as ṁ^s_{l_s} in training, the prediction model would be prone to overfitting.
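The δ-insensitive loss in (5) reduces to a single clamped squared error; a minimal PyTorch sketch (with an arbitrary δ) is given below.

```python
import torch

def flowrate_loss(pred, ref, delta=0.01):
    """Eq. (5): squared error is ignored while it stays below delta, which
    keeps the model from fitting the small reference error exactly."""
    return torch.clamp((pred - ref) ** 2 - delta, min=0.0).mean()
```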

Finally, we need to design the loss function for E(p; θ_e). E(p; θ_e) is expected to distill domain-invariant and flowrate-discriminative features at the same time, so its loss function should contain two parts. First, E(p; θ_e) can improve the similarity between F_s(f) and F_t(f) by fooling D(f; θ_d). However, the approach used to train the generator G(·; θ_g) in WGANs is not appropriate for domain adaptation. In the case of image generation, changing the parameters of G(·; θ_g) does not affect the data distribution of real images. Therefore, the real images are not used when training G(·; θ_g); θ_g is updated to maximize the discriminator's scores for the generated pictures. In other words, the distribution of synthetic images is modified to get close to the distribution of real images, which is fixed. In the case of domain adaptation, however, θ_e has a great impact on both F_s(f) and F_t(f): F_t(f) will change as we adjust θ_e to modify F_s(f). Hence, both F_s(f) and F_t(f) need to be considered when E(·; θ_e) is optimized. Therefore, we define L″_d as

$$\mathcal{L}''_d(p_s, p_t; \theta_e, \theta_d) = \Big\{\mathbb{E}_{p_s \sim S(p)}\big[D(E(p_s; \theta_e); \theta_d)\big] - \mathbb{E}_{p_t \sim T(p)}\big[D(E(p_t; \theta_e); \theta_d)\big]\Big\}^2. \qquad (6)$$

The difference between the two expectations in (6) indicates the dissimilarity between F_s(f) and F_t(f). Therefore, minimizing L″_d as a function of θ_e maximizes the similarity between F_s(f) and F_t(f).

Furthermore, the features produced by E(p; θ_e) should also be flowrate discriminative, and this goal can simply be achieved by minimizing L_p. Therefore, the complete loss function of E(p; θ_e) can be written as (7), and the pseudo-code2 of FANs is given in Algorithm 1.


Algorithm 1 FANs (we recommend RMSprop for parameter optimization)
Require: the gradient penalty coefficient η, the domain coefficient λ, the number of domain discriminator iterations per loop n_d, the batch size m, RMSprop hyperparameters α, ε_d, ε_e, and ε_p.
Require: initial feature extractor E(·; θ_e0), initial label predictor P(·; θ_p0), initial domain discriminator D(·; θ_d0).
1:  while θ has not converged do
2:      for each i ∈ [1, n_d] do
3:          for each j ∈ [1, m] do
4:              Sample source data p_s ∼ S(p), target data p_t ∼ T(p), and a random number β ∼ U[0, 1].
5:              f_s ← E(p_s; θ_e)
6:              f_t ← E(p_t; θ_e)
7:              f̄ ← β f_t + (1 − β) f_s
8:              L′_{d_j} ← D(f_s; θ_d) − D(f_t; θ_d) + η(‖∇_{f̄} D(f̄; θ_d)‖_2 − 1)²   {Eq. (4)}
9:          end for
10:         θ_d ← RMSprop(∇_{θ_d} (1/m) Σ_{j=1}^{m} L′_{d_j}, θ_d, ε_d, α)
11:     end for
12:     Sample a batch of unlabeled target data {p_{t_j}}_{j=1}^{m} ∼ T(p) and a batch of labeled source data {(p_{s_j}, ṁ_{l_{s_j}})}_{j=1}^{m} ∼ S(p, ṁ_l).
13:     {f_{s_j}}_{j=1}^{m} ← {E(p_{s_j}; θ_e)}_{j=1}^{m}
14:     {f_{t_j}}_{j=1}^{m} ← {E(p_{t_j}; θ_e)}_{j=1}^{m}
15:     L_p ← (1/(2m)) Σ_{j=1}^{m} [max((P(f_{s_j}; θ_p) − ṁ_{l_{s_j}})² − δ, 0)]   {Eq. (5)}
16:     L″_d ← [(1/m) Σ_{j=1}^{m} D(f_{s_j}; θ_d) − (1/m) Σ_{j=1}^{m} D(f_{t_j}; θ_d)]²   {Eq. (6)}
17:     L_e ← L_p + λ L″_d   {Eq. (7)}
18:     θ_p ← RMSprop(∇_{θ_p} L_p, θ_p, ε_p, α)
19:     θ_e ← RMSprop(∇_{θ_e} L_e, θ_e, ε_e, α)
20: end while

The complete loss function of E(p; θ_e) is

$$\mathcal{L}_e(p_s, p_t, \dot m_{l_s}; \theta_e, \theta_p, \theta_d) = \mathcal{L}_p(p_s, \dot m_{l_s}; \theta_e, \theta_p) + \lambda\, \mathcal{L}''_d(p_s, p_t; \theta_e, \theta_d). \qquad (7)$$
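Putting the pieces together, one outer iteration of Algorithm 1 could be sketched as below, reusing the discriminator_loss and flowrate_loss sketches given earlier; the data loaders, optimizers, λ, and n_d are assumed to be defined elsewhere, so this illustrates the update order rather than our released implementation.

```python
def fan_outer_step(E, P, D, opt_e, opt_p, opt_d,
                   source_loader, target_loader, lam=1.0, n_d=5):
    # Lines 2-11: several critic updates per outer loop.
    for _ in range(n_d):
        p_s, m_ls = next(source_loader)
        p_t = next(target_loader)
        loss_d = discriminator_loss(E, D, p_s, p_t)        # Eq. (4)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Lines 12-19: one joint update of the predictor and the feature extractor.
    p_s, m_ls = next(source_loader)
    p_t = next(target_loader)
    f_s, f_t = E(p_s), E(p_t)
    loss_p = flowrate_loss(P(f_s), m_ls)                   # Eq. (5)
    loss_dd = (D(f_s).mean() - D(f_t).mean()) ** 2         # Eq. (6)
    loss_e = loss_p + lam * loss_dd                        # Eq. (7)

    # Backpropagating loss_e reaches both theta_e and theta_p; gradients that
    # also land in D's parameters are cleared by the next critic update.
    opt_p.zero_grad(); opt_e.zero_grad()
    loss_e.backward()
    opt_p.step(); opt_e.step()
```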

C. Similarity and Difference Between FANs and DANs

The basic ideas of FANs and DANs are quite similar. Both of them try to find domain-invariant and label-discriminative features via adversarial learning with a domain discriminator D(f; θ_d). However, there are two major differences. First, the loss function of D(f; θ_d) in DANs is defined as

$$\mathcal{L}_d(p_s, p_t; \theta_e, \theta_d) = -\mathbb{E}_{p_s \sim S(p)}\big[\log(1 - D(E(p_s; \theta_e); \theta_d))\big] - \mathbb{E}_{p_t \sim T(p)}\big[\log D(E(p_t; \theta_e); \theta_d)\big]. \qquad (8)$$

This differs from our method and probably causes instability and difficulty in training [26]. More importantly, D(f; θ_d)

2Our codes are available at https://github.com/hudelin19/idan_multiphaseflow

Fig. 6. Overviews of our proposed FANs and the original DANs. In both methods, the procedure to predict the flowrate of multiphase flows is the same: the time series of multichannel pressure signals p first passes through the feature extractor E(p; θ_e), which contains multiple 1-D convolutional layers and maxpooling layers, yielding the feature vector f. The domain discriminator D(f; θ_d) and the label predictor P(f; θ_p) determine the domain label and the liquid mass flowrate based on f. However, the parameters of the prediction models are updated according to different loss functions. (a) In DANs, D(f; θ_d) is connected with E(p; θ_e) via a gradient reverse layer, and E(p; θ_e) tries to capture domain-invariant features by directly maximizing the loss of D(f; θ_d). (b) In FANs, we employ two different loss functions, L′_d and L″_d, for training D(f; θ_d) and E(p; θ_e), respectively. Instead of maximizing the probability that D(f; θ_d) makes mistakes, we expect the source and target features produced by E(p; θ_e) to obtain close outputs after passing through D(f; θ_d).

is connected with the feature extractor E(p; θ_e) via a gradient reverse layer in the DAN architecture (see Fig. 6). This layer provides the reversed gradient of the loss of D(f; θ_d), i.e., −∂L_d/∂θ_e, when training E(p; θ_e). In other words, E(p; θ_e) attempts to produce domain-invariant features by directly maximizing the loss of D(f; θ_d). However, this is not reasonable: D(f; θ_d) will completely misclassify f_s as class 1 and f_t as class 0 when L_d reaches its maximum, which does not mean that F_s(f) and F_t(f) have high similarity. In fact, D(f; θ_d) only needs to add a minus sign to its output for the domain classification accuracy to return to 100%. The objective of DANs is

$$\theta_d^* = \arg\min_{\theta_d} \mathcal{L}_d(p_s, p_t; \theta_e, \theta_d) \qquad (9)$$

$$(\theta_e^*, \theta_p^*) = \arg\min_{\theta_e, \theta_p} \mathcal{L}_p(p_s, \dot m_{l_s}; \theta_e, \theta_p) - \lambda\, \mathcal{L}_d(p_s, p_t; \theta_e, \theta_d). \qquad (10)$$

In our method, the gradient reverse layer is removed. We do not require E(p; θ_e) to generate f_s and f_t that D(f; θ_d) will completely classify into the incorrect categories. Instead,


Fig. 7. Learning and testing curves of CNNs with different output objectives. (a)–(c) Learning and testing curves with ṁ_l, M_l, and v̇_l as the output, respectively. The results are the average of three runs, and the shadow represents the standard deviation. The horizontal axis of each graph represents the number of epochs, and the vertical axis represents the mean absolute relative error.

TABLE III

NETWORK SPECIFICATION

we expect f_s and f_t to yield similar, or even the same, outputs after passing through D(f; θ_d). The objective of FANs is

$$\theta_d^* = \arg\min_{\theta_d} \mathcal{L}'_d(p_s, p_t; \theta_e, \theta_d) \qquad (11)$$

$$\theta_p^* = \arg\min_{\theta_p} \mathcal{L}_p(p_s, \dot m_{l_s}; \theta_e, \theta_p) \qquad (12)$$

$$\theta_e^* = \arg\min_{\theta_e} \mathcal{L}_p(p_s, \dot m_{l_s}; \theta_e, \theta_p) + \lambda\, \mathcal{L}''_d(p_s, p_t; \theta_e, \theta_d). \qquad (13)$$

V. RESULTS AND DISCUSSION

On the basis of the dynamic experimental data collected from the multiphase flow test facility discussed in Section III, we carried out a series of experiments to evaluate our method. The specification of the networks used in all experiments is shown in Table III.

CNN prediction models consist of E(p; θ_e) and P(f; θ_p), whereas DANs and FANs contain all three parts in the table. However, the output of D(f; θ_d) in DANs needs to pass through a sigmoid function, because D(f; θ_d) is regarded as a binary classifier in this case. In the following experiments, we chose SELU as the activation function and initialized the parameters in accordance with [38]. The size of each convolutional filter in the table is 3 with a stride of 1, and the size of each max-pool is 2 with a stride of 2.

A. CNNs With Different Output Objectives

First, we want to test whether choosing different types of flowrates as the output of the same CNN affects the performance. Therefore, we trained different CNN prediction models on water–air two-phase flow samples with different kinds of flowrates as outputs, i.e., the mass flowrate ṁ_l, the momentum flowrate M_l, and the volume flowrate v̇_l. Then we tested each prediction model on oil–air two-phase flow samples and oil–water–air three-phase flow samples. The learning and testing curves are shown in Fig. 7.

The results show that no matter which kind of flowrate is selected as the output, the performance of the CNN prediction model on training samples is quite close. The mean absolute relative error for each CNN eventually converges to around 5% on training samples and 9% on validation samples. However, the performance on testing samples varies with the output objective. The mean absolute relative error on the oil–air two-phase flow data set is 13% for the ṁ_l output, 15% for the M_l output, and close to 20% for the v̇_l output. The same metric on the oil–water–air three-phase flow data set is 22% for the ṁ_l output, 25% for the M_l output, and more than 28% for the v̇_l output.

This phenomenon is related to the physical properties of the differential pressure signals. However, to our surprise, it somewhat conflicts with the formula of traditional venturi single-phase flowmeters [36]

$$\dot m_l = \frac{\epsilon C}{\sqrt{1 - \beta^4}}\, \pi r^2 \beta^2 \sqrt{2 \rho_l \Delta P_1} \qquad (14)$$

where ε is the expansion factor of the fluid, C is the discharge coefficient of the venturi tube, r is the radius of the throat, and β is the ratio between the diameters of the throat and the pipe.


Fig. 8. Learning and testing curves of FANs, DANs, and CNNs in different transfer scenarios. (a) Performance of different algorithms in Sw → Tow, (b) Sw → To, and (c) So → Tow. The results are the average of three runs, and the shadow represents the standard deviation. The horizontal axis of each graph represents the number of epochs, and the vertical axis represents the mean absolute relative error.

According to (14), ΔP1 directly reflects the instantaneous momentum flowrate M_l rather than the instantaneous mass flowrate ṁ_l. A similar conclusion can be drawn by dimensional analysis. However, the CNN with the mass flowrate output has the strongest generalization ability, rather than the CNN with the momentum flowrate output. This result is probably attributable to two factors. First of all, the mechanism of multiphase flows is much more complicated than that of single-phase flows, and conclusions based on single-phase flow theories are not necessarily applicable to multiphase flows. In addition, CNNs mainly determine the average flowrate by detecting patterns formed by time-adjacent multichannel pressure signals. For the single-phase prediction formula, an instantaneous differential pressure signal yields an instantaneous flowrate, and the relationships among neighboring pressure signals in the time domain are not considered. Therefore, the two methods are essentially different.

From the testing curves of the CNNs with different kinds of flowrates as output objectives, we observe that the difference in liquid-phase densities has a negative impact on the performance of the prediction model, and the issue can be mitigated to some extent by selecting the type of flowrate to predict. The mass flowrate ṁ_l output achieves the best performance in comparison to the other two kinds of flowrates. Therefore, in the following experiments, all models were trained to predict the mass flowrate ṁ_l rather than M_l or v̇_l.

B. Results of Different Domain Adaptation Tasks

Next, in order to evaluate our proposed domain adaptation method, we consider three different transfer scenarios, i.e., transferring from the water–air two-phase flow to the oil–water–air three-phase flow (Sw → Tow), transferring from the water–air two-phase flow to the oil–air two-phase flow (Sw → To), and transferring from the oil–air two-phase flow to the oil–water–air three-phase flow (So → Tow). The learning and testing curves are shown in Fig. 8. Note that we only take into account the oil–water–air three-phase flow samples whose mass flowrate is less than 4800 kg/h in So → Tow, mainly because regression models are not good at extrapolation. The maximum flowrate in the training samples is approximately 5200 kg/h, but is close to 6000 kg/h in the testing samples. The

testing samples with large mass flowrates should be ignored; otherwise, extrapolation will take place in testing.

In the Sw → Tow and So → Tow scenarios, CNNs have the smallest relative error on source domain samples, at about 5% in both cases, but the largest relative error on target domain samples, at around 22% and 20%, respectively. This is mainly because CNNs learn numerous domain-specific features, which help to improve the accuracy on source domain samples but decrease the performance on target domain samples. On the contrary, FANs produce the largest relative error on source domain samples, at around 7% in those two experiments, but the smallest relative error on target domain samples, at around 10%. The gap between source domain samples and target domain samples is only 3%, which is much less than its counterpart in CNNs. This demonstrates that the domain discriminator enables the feature extractor to learn domain-invariant features rather than domain-specific features. DANs occupy an intermediate position on both source and target samples in terms of the relative error; they can also capture domain-invariant features, but FANs achieve better performance.

In the Sw → To scenario, the learning and testing curves of all three methods are quite close. The FAN and DAN prediction models do not yield any remarkable improvement in this case compared to the CNN. The reason is that the data distributions of source samples and target samples are very similar in this scenario. The testing error of the CNN on target domain samples is around 13%, which is only 4% higher than the error on source validation samples (see Fig. 7). This means that the error could drop by less than 4% even if the data distributions of source and target samples were exactly the same. In fact, the samples of the water–air two-phase flow and the samples of the oil–air two-phase flow can be approximately considered as being generated by the same distribution in our experiments. This is mainly because the experimental conditions under which the dynamic flow experiments were carried out were very close, including the number of mediums in the liquid phase, the static pressure, the range of the gas mass flowrate, the liquid viscosity, and so on. In addition, even though the difference between the liquid densities may lead to a decline in prediction accuracy on testing samples, selecting the mass flowrate as the output objective of the prediction model can alleviate the issue to


Fig. 9. Prediction results of FANs and CNNs on target domain samples. (a) Sw → Tow. (b) Sw → To. (c) So → Tow. The horizontal axis of each left graph in (a)–(c) represents the reference of the liquid mass flowrate ṁ_l, and the vertical axis represents the prediction of the liquid mass flowrate ṁ̂_l. The meaning of the horizontal axis in each middle graph is the same as in the left one, but the vertical axis is the absolute error between the prediction and the reference. Each right graph shows the relationship between the gas mass flowrate ṁ_g and the absolute error.

some extent. Therefore, the CNN is able to achieve satisfactory prediction accuracy in this case, and domain adaptation does not seem to be essential.

The prediction results of FANs and CNNs are shown in Fig. 9. In general, with domain adaptation, the prediction accuracy rises, the prediction error band narrows, and the model becomes more reliable. However, this does not mean that all predicted values are closer to the ground truth. For example, as illustrated in Fig. 9(a), the outputs of the FAN for several samples whose flowrate is near 300 kg/h are much worse than the estimates of the CNN. For further analysis, we plot Fig. 10 to show the improvement after domain adaptation. The improvement can be interpreted as the decrement from the distance between the CNN prediction and the reference to the distance between the FAN prediction and the reference. If the decrement is positive, the absolute error of the FAN is less than that of the CNN, which should be regarded as a positive improvement. If the decrement is negative, the absolute error of the FAN is greater than that of the CNN, which should be regarded as a negative improvement.

From Fig. 10, we observe that most target domain samples gain a positive improvement after domain adaptation. To be specific, more than 92% of the target domain samples obtain better predicted values in Sw → Tow, and more than 88% in So → Tow. Even in Sw → To, the number of target domain samples with a positive improvement exceeds half. Furthermore, we also notice that positive improvements generally have larger magnitudes than negative improvements. For instance, as illustrated in Fig. 10(a), most of the negative improvements are smaller than 500 kg/h in magnitude, while numerous positive improvements are higher than 500 kg/h and even up to 1500 kg/h. Therefore, the accuracy of the prediction model sees a significant rise after domain adaptation.
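The per-sample improvement plotted in Fig. 10 follows directly from the two sets of predictions; the short sketch below uses made-up numbers purely to show the computation.

```python
import numpy as np

# reference, cnn_pred and fan_pred are liquid mass flowrates in kg/h (illustrative values).
reference = np.array([3000.0, 1200.0, 4500.0])
cnn_pred  = np.array([3400.0, 1150.0, 5200.0])
fan_pred  = np.array([3100.0, 1250.0, 4600.0])

# Positive values: the FAN prediction is closer to the reference than the CNN's.
improvement = np.abs(cnn_pred - reference) - np.abs(fan_pred - reference)
print(improvement)   # [300.   0. 600.]
```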

Moreover, Fig. 9 also demonstrates that there is no significant reduction in the prediction accuracy of our method with respect to different dynamic experimental conditions. For example, the system static pressure in the oil–water–air dynamic flow experiments is different from that in the water–air and oil–air experiments. However, our prediction model achieves comparable performance on both the three-phase flow data set and the two-phase flow data set. Our model also performs well over a large variety of flow patterns, GVFs, and water-in-liquid ratios. Most state-of-the-art techniques in the multiphase flow measurement field cannot achieve this result.

Fig. 10. Improvement of FANs on target domain samples compared to CNNs. The horizontal axis of each graph is the reference of the liquid mass flowrate ṁl, and the vertical axis is the improvement after domain adaptation, defined as the decrement from the distance (absolute error) between the CNN prediction and the reference to the distance (absolute error) between the FAN prediction and the reference. (a) Sw → Tow. (b) Sw → To. (c) So → Tow.

TABLE IV
REGRESSION METRICS

Table IV evaluates the different prediction models with three regression metrics: mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean square error (RMSE). Our method outperforms the DANs and CNNs in the Sw → Tow and So → Tow domain adaptation tasks with regard to all three metrics. Even in Sw → To, our method shows a slight improvement over the CNN.
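For reference, the three metrics in Table IV can be computed as in the following minimal NumPy sketch; the array names y and y_hat (the reference and predicted liquid mass flowrates, in kg/h) are illustrative and not taken from the paper's evaluation code, and MAPE is expressed in percent.

import numpy as np

def mape(y, y_hat):
    # mean absolute percentage error, in percent
    return float(np.mean(np.abs((y - y_hat) / y)) * 100.0)

def mae(y, y_hat):
    # mean absolute error, in kg/h
    return float(np.mean(np.abs(y - y_hat)))

def rmse(y, y_hat):
    # root mean square error, in kg/h
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))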

VI. CONCLUSION

In this paper, we have constructed feature maps of multiphase flows from the time series of multi-channel pressure signals and successfully used them to train CNN models for flowrate prediction. Moreover, we have taken into account the accuracy degradation caused by the gap between testing and training samples, which may be generated by different multiphase flows under different operating environments and flow conditions. We propose FANs, on the basis of DANs, to mitigate this issue. Compared with previously reported methods, which can merely provide reliable predictions for some specific flows (such as wet gas), our method can be applied to a much wider range of flow conditions and operating environments. The experimental results suggest that our prediction model can achieve promising testing accuracy even when the testing and training samples differ because of different experimental conditions during data collection. In other words, the proposed method has strong generalization ability and practical application potential.
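To make the adversarial training idea concrete, the following is a minimal, illustrative PyTorch sketch of a DANN-style update of the kind FANs build on: a feature extractor E minimizes the flowrate regression loss on labeled source samples while a gradient-reversal layer forces its features to fool a domain discriminator D. All layer sizes, the optimizer settings, and the reversal weight lambda_ are hypothetical placeholders rather than the authors' configuration, and the FAN-specific modifications described in this paper are omitted.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # identity in the forward pass; scales and reverses gradients in the backward pass
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

# E(.; theta_e): feature extractor over multi-channel pressure time series
# (8 pressure channels assumed here for illustration)
E = nn.Sequential(nn.Conv1d(8, 16, kernel_size=5), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1), nn.Flatten())
# P(.; theta_p): flowrate regression head
P = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
# D(.; theta_d): domain discriminator (source vs. target)
D = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

opt = torch.optim.Adam(
    list(E.parameters()) + list(P.parameters()) + list(D.parameters()), lr=1e-3)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

def train_step(p_s, m_s, p_t, lambda_=0.1):
    # p_s, p_t: source/target pressure series, shape (batch, 8, length)
    # m_s: source liquid mass flowrate references, shape (batch,)
    f_s, f_t = E(p_s), E(p_t)
    loss_reg = mse(P(f_s).squeeze(1), m_s)                 # flowrate loss on source samples
    f_all = GradReverse.apply(torch.cat([f_s, f_t]), lambda_)
    d_logits = D(f_all).squeeze(1)                          # domain predictions
    d_labels = torch.cat([torch.zeros(len(p_s)), torch.ones(len(p_t))])
    loss = loss_reg + bce(d_logits, d_labels)               # adversarial domain loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_reg.item()

In such a scheme, the discriminator's gradient is reversed before reaching E, so minimizing the combined loss simultaneously encourages flowrate-discriminative and domain-invariant features, which is the behavior summarized above.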

In the near future, we plan to install our devices in a real industrial production environment to evaluate our method on practical multiphase flow samples, which is much more challenging. In addition, in order to further strengthen our model, we also plan to design a novel deep architecture to integrate signals from multimodal sensors.


Delin Hu received the B.E. degree in automation from the College of Electrical Engineering and Information Technology, Sichuan University, Sichuan, China, in 2015. He is currently pursuing the M.E. degree in control science and engineering with the Department of Automation, Tsinghua University, Beijing, China.

His current research interests include machine learning (ML), sensor data mining, and industrial process measurement and monitoring.

Jinku Li received the B.E. degree from the School of Automation, Beijing Institute of Technology, Beijing, China, in 2015. He is currently pursuing the Ph.D. degree with the Department of Automation, Tsinghua University, Beijing.

His current research interests include multiphase flow, flow measurement and instrumentation, and machine learning (ML) techniques.

Yinyan Liu received the B.E. degree from the Department of Automation, School of Control and Computer Engineering, North China Electric Power University, Beijing, China, in 2011, and the M.E. degree from the Department of Automation, Tsinghua University, Beijing, in 2018.

Her current research interests include electrical tomography, intelligent sensors, and deep learning.

Yi Li received the B.E. degree from the University of Central Lancashire, Preston, U.K., in 2004, and the Ph.D. degree from The University of Manchester, Manchester, U.K., in 2008.

He is currently an Associate Professor with the Graduate School at Shenzhen, Tsinghua University, Beijing, China. His current research interests include electrical tomography, data fusion, gas–liquid multiphase flow measurement, and image reconstruction.