Abstract

The first operation in an oil refinery is atmospheric distillation. To maximize the extraction of products such as Gas Oil, a proper set of online analysis instruments is required. Such instruments are not always available, especially in small and medium-size processing plants.
Because color is a limiting specification, it constitutes a restriction for production optimization. The real-time availability of a good estimate of its value is what allows the process to be operated permanently under its most beneficial conditions.
In this work a Neural Network (NN) approach to infer the color is proposed. A feed-forward NN structure is used to identify the non-linear mapping from the available process variables to that property.
To acquire representative input/output data, a set of dynamic experiments (move tests) was carried out in the plant. A rigorous analysis was then performed to select the set of input variables; process engineers' knowledge as well as mathematical tools were used to evaluate a minimum set of inputs. From this analysis, the set of forty-three available inputs was reduced to the eight most sensitive with respect to the color representation.
Furthermore, rather than representing the entire transformation from the set of input variables to the output variable by a single neural network function, we analyze the possibility of breaking down the mapping into an initial pre-processing stage followed by a parameterized neural network model.
Inferences with Neural Networks

Modeling techniques are based on three different approaches to describing the relation between inputs and outputs: mechanistic, linear regression and black box. Neural Networks belong to the latter. In this kind of technique, which yields models with no predefined structure, the main idea is that a system composed of simple processing elements, connected in parallel, can learn the complex relations that exist between a large number of inputs and outputs. Once the net has been trained, it can predict the output for any given input with great accuracy.
Neural Networks are composed of several layers. The first acts as a distribution node, transferring the input data to all the neurons in the second layer. The last layer returns the output of the predicted variable. Between the first and last layers lie the hidden layers; typically there is only one. Each neuron in the input layer is connected to every neuron in the hidden layer, and each of these is in turn connected to every output neuron. All these connections are weighted, and it is these weights that determine the global function of the net. The basic structure of a NN is shown in figure 1.
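The layered structure described above can be sketched as a forward pass in Python. This is a minimal illustration, not the authors' implementation: the dimensions are assumptions (eight inputs, as selected in this study, five hidden neurons and one output), and the weights shown are random placeholders rather than a trained net.

```python
import numpy as np

def sigmoid(z):
    # logistic activation: maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One pass through a net with a single hidden layer.

    x  : input vector (distributed unchanged by the input layer)
    W1 : weights connecting every input to every hidden neuron
    W2 : weights connecting every hidden neuron to the output
    """
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = W2 @ h + b2            # output neuron (predicted variable)
    return y

# illustrative, untrained weights: 8 inputs -> 5 hidden -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 8)), np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)
y = forward(rng.normal(size=8), W1, b1, W2, b2)
```

Training then consists of adjusting W1, b1, W2 and b2 so that the predicted output matches the measured property over the identification data.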
Inside each neuron we can distinguish two stages: the first is the weighted sum of the inputs, and the second is an activation function (generally sigmoidal) applied to this sum to generate the neuron's output.
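These two stages can be written out for a single neuron as follows; this is a sketch for illustration only, with function and parameter names of our choosing.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    # stage 1: weighted sum of the inputs
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    # stage 2: sigmoidal activation applied to the sum
    return 1.0 / (1.0 + math.exp(-s))
```

With all weights at zero the weighted sum vanishes and the sigmoid returns 0.5, its midpoint; any finite weighted sum yields an output strictly between 0 and 1.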