Coactive Neuro-Fuzzy Modeling

[Figure: ANFIS / CANFIS architectures]
1. Introduction

Adaptive neuro-fuzzy models enjoy both many of the advantages claimed for NNs and FS's linguistic interpretability, which allows prior knowledge to be embedded in their construction and allows the results of learning to be possibly understood, as well as easy implementation of such a neuro-fuzzy model by modifying an available back-prop program.

2. CANFIS Architecture

2.1. Comparison with a simple back-prop NN

We contrast the neuro-fuzzy models with well-discussed black-box NNs, whose weight coefficients are just numeric connection strengths but not good language: the hidden layer of the simple back-prop NN is tantamount to the consequent layer of CANFIS. Putting more hidden nodes in the NN is equivalent to adding more rules in CANFIS. The NN's weights between the output layer and the hidden layer correspond to membership values on the fuzzy association layer in CANFIS. This comparison emphasizes the inside transparency of CANFIS.
From an architectural point of view, CANFIS's powerful capability stems from pattern-dependent weights between the consequent layer and the fuzzy association layer. Namely, membership values correspond to those dynamically changeable weights that depend on input patterns. In contrast, the simple back-prop NN tries to find one specific set of weights common to all training patterns; in other words, the weights are used in a global fashion. One typical criticism of the simple back-prop NNs is in globally updating weight coefficients. On the other hand, CANFIS and a Radial Basis Function Network (RBFN) are locally tuned [13, 19]. With bell-shaped fuzzy membership functions (MFs), CANFIS has a mechanism whereby it can produce a center-weighted response to small receptive fields, localizing the primary input excitation. In this sense, CANFIS can be functionally equivalent to an RBFN [6]. Just as an RBFN enjoys its quick convergence capability, CANFIS also can evolve to recognize some feature in a training dataset very quickly compared with simple back-propagation NNs.
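The layer correspondences described above can be made concrete in a small sketch. The following is a minimal, single-input CANFIS-style forward pass (an illustrative reconstruction, not the authors' code; all names and parameter values are assumptions): bell-shaped MFs produce membership values that act as pattern-dependent weights over linear rule consequents.

```python
import numpy as np

def bell_mf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def canfis_forward(x, mf_params, consequents):
    """One forward pass of a single-input CANFIS-style model.

    mf_params  : list of (a, b, c) antecedent parameters, one per rule
    consequents: list of (p, q) linear rule consequents, f_i(x) = p*x + q

    The membership values are recomputed for every input x, so they act
    as pattern-dependent weights, unlike an MLP's fixed weights.
    """
    w = np.array([bell_mf(x, a, b, c) for (a, b, c) in mf_params])
    w_norm = w / w.sum()                                  # fuzzy association layer
    f = np.array([p * x + q for (p, q) in consequents])   # consequent layer
    return float(np.dot(w_norm, f))                       # weighted rule outputs

# Three rules, echoing the 3-rule experiments discussed in the text
mfs  = [(1.0, 2.0, -1.0), (1.0, 2.0, 0.0), (1.0, 2.0, 1.0)]
cons = [(1.0, 0.5), (-2.0, 0.0), (1.0, -0.5)]
print(canfis_forward(0.0, mfs, cons))  # → 0.0 (symmetric rules cancel at x = 0)
```

Adding a rule here adds one (a, b, c) and one (p, q) pair, which mirrors the observation that more hidden nodes in the NN correspond to more rules in CANFIS.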
Figure 2 (above) visualizes this concept. In short, rules are constructed with shared membership functions.

[Figure 3: (Q) CANFIS with 3 linear rules; (R) CANFIS with 3 linear rules; final outputs]
4. Analysis of results
4.1. Limitations of Interpretability
When three rules are introduced, it is observed that two MFs (MF2, MF3) transit back and forth at the beginning of their evolution as if struggling to find comfortable niches (see Figure 4 (left)). After …

4.2. Evolution of antecedents (MFs)

Another interesting point in the result is that MFs' appearances depend on training methods. The original ANFIS enjoys its hybrid learning procedure based on both a heuristic, adaptive step-size strategy and a combination of the gradient descent (GD) method and the least-squares estimation (LSE), whereby, in the forward pass, consequent parameters are updated based on LSE using neuron outputs, and in the backward pass, antecedent parameters are updated based on the GD method using error signals.
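The two-pass procedure just described can be sketched as follows. This is a simplified stand-in, not the paper's implementation: with antecedents held fixed, the model output is linear in the consequent parameters, so one least-squares solve updates them; the antecedent centers are then nudged by gradient descent (a numerical gradient is used here for brevity, and the heuristic step-size strategy is omitted). The zig-zag target standing in for the paper's N-shape dataset is an assumption.

```python
import numpy as np

def bell_mf(x, a, b, c):
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def norm_weights(x, centers, a=1.0, b=2.0):
    """Normalized membership values, shape (N, R)."""
    w = np.stack([bell_mf(x, a, b, c) for c in centers], axis=1)
    return w / w.sum(axis=1, keepdims=True)

def lse_consequents(x, y, centers):
    """Forward pass: solve the linear consequent parameters (p_i, q_i)
    in one least-squares step, antecedents fixed."""
    wn = norm_weights(x, centers)
    A = np.hstack([wn * x[:, None], wn])   # columns: w_i * x, w_i
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta, A

def hybrid_epoch(x, y, centers, lr=0.01):
    """One epoch: LSE for consequents, then GD on antecedent centers
    (central-difference gradient of the squared error)."""
    theta, A = lse_consequents(x, y, centers)
    def sse(cs):
        th, Am = lse_consequents(x, y, cs)
        return float(np.sum((Am @ th - y) ** 2))
    g, eps = np.zeros_like(centers), 1e-5
    for i in range(len(centers)):
        cp = centers.copy(); cp[i] += eps
        cm = centers.copy(); cm[i] -= eps
        g[i] = (sse(cp) - sse(cm)) / (2 * eps)
    return centers - lr * g, float(np.sum((A @ theta - y) ** 2))

x = np.linspace(-1.0, 1.0, 101)
y = np.interp(x, [-1, -1/3, 1/3, 1], [-1, 1, -1, 1])  # assumed N-like target
centers = np.array([-1.0, 0.0, 1.0])
for _ in range(10):
    centers, err = hybrid_epoch(x, y, centers)
```

Note how the LSE step always drives the consequents to the best fit for the *current* MF placement, which is exactly the behavior the next paragraph questions.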
In this simulation, where original bell MFs were introduced, CANFIS with the GD method alone unexpectedly converged faster than CANFIS with the hybrid learning procedure. Moreover, CANFIS based on the hybrid learning procedure did not fit the N-shape very well, as Figure 6 shows, while CANFIS with the GD method alone recognized the features of the N-shape well, as shown in Figure 4 (right). In many cases, the hybrid learning algorithm works better than the GD method alone, but it may be worth considering possible reasons for this observation. The hybrid learning procedure basically predominates when intuitively-positioned MFs do not need to evolve very much. Initially, LSE may specialize consequents to a great extent, which may prevent MFs from evolving. LSE can find certain values of consequents that have minimal errors with the current MF setup, but after updating coefficients, it may end up losing its way to a global minimum.
Without a priori knowledge, initial setups of MFs' parameters (antecedents) and consequents may not be perfect; to trust the initial guess may turn out to be a possible obstacle in obtaining better results. Note that the resultant MFs in Figures 3 (Q),(R) may not match any initial guess.
Let us now assume that we acquire two consequents from data sets. In this case, is it a good idea to stick to them? Figure 5 shows two results obtained using two fixed consequents, which purposely coincide with two side lines of the N-shape. Figure 5 (A), resulting from (a), suggests that clinging to those two consequents may not be useful in obtaining a good fit. On the other hand, Figure 5 (B) suggests that CANFIS is able to adapt to the N-shape to some extent even when the initial MFs are poorly set up as in (b), where there is no intersection between the two modified bell MFs. As in (B), the left-hand base of MF1 reached the pointed corner of the N-shape, and therefore MF1 evolved to a shape different from the MF1 as in Figure 3 (Q).

4.4. Evolution of Consequents (Rules)

There must be some optimal combinations of shapes of MFs and forms of consequents. Figure 7 shows some of them. Interestingly, outputs of the adapted consequents depicted in Figure 7 (A)-(D) end up being different and far from the desired N-shape. This is because we trained both antecedent and consequent parts simultaneously. Thus, each output does not have to fit the desired N-shape; the combined outputs fit the N-shape.
These results confirm that CANFIS can grasp the peculiarity of a given data set because of its adaptive capability.

5. Conclusion
References

[1] S. Haykin, Neural Networks: A Comprehensive Foundation, ch. 12, "Modular Networks." Macmillan